
Understanding APIs


APIs are software mediators that allow applications to communicate and integrate efficiently. They can be implemented in many different ways, such as REST, SOAP, or GraphQL, and can be augmented with management tools, gateways, and service meshes. Used effectively, APIs help businesses streamline development, innovate through public sharing, and monetize data while maintaining control and security.


Growing your business on the web and on mobile relies on the ability to communicate, integrate, and connect your products and services with different software programs. Application programming interfaces (APIs) are the key to doing this well—with the most flexibility, simplicity, safety, and control.

API stands for application programming interface, which is a set of tools, definitions, and protocols for building and integrating application software. It’s the stuff that lets your products and services communicate with other products and services without having to constantly build new connectivity infrastructure.

APIs help organizations share resources and information while maintaining security, control, and authentication, determining who gets access to what. They can be private (for internal use only), partner (shared with specific partners to provide additional revenue streams), or public (allowing third parties to develop apps that interact with your API to foster innovation). As an API consumer, you also don't need to know how a resource is retrieved or where it comes from, such as whether a response was served from a cache.

APIs let your product or service communicate with other products and services without having to know how they’re implemented. This can simplify app development, saving time and money. When designing new tools and products—or managing existing ones—APIs give flexibility; simplify design, administration, and use; and provide opportunities for innovation.

Choosing to share your APIs has several benefits, including:

  • Creating new revenue channels or extending existing ones.
  • Expanding the reach of your brand.
  • Facilitating open innovation or improved efficiency through external development and collaboration.

Think of APIs as mediators between the users or clients and the resources or web services they want to get. If you want to interact with a computer or system to retrieve information or perform a function, an API helps you communicate what you want to that system so it can understand and fulfill the request. 

Sometimes referred to as a contract between an information provider and an information user, APIs establish the content required from the consumer (the call) and the content required by the producer (the response). If party 1 sends a remote request structured a particular way, this is how party 2’s software will respond. For example, the API design for a weather service could specify that the user supply a zip code and that the producer reply with a 2-part answer, the first being the high temperature, and the second being the low.  
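
As a sketch of that contract, the hypothetical weather service below is simulated with canned data; the zip codes, temperatures, and function name are all invented for illustration.

```python
# Hypothetical sketch of the weather-service contract described above.
# The zip codes and temperatures are canned sample data, not a real service.
def weather_api(zip_code: str) -> dict:
    canned = {
        "10001": {"high": 78, "low": 62},
        "66061": {"high": 85, "low": 70},
    }
    # The contract: the consumer supplies a zip code (the call), and the
    # producer replies with a 2-part answer: high and low temperature.
    return canned.get(zip_code, {"high": None, "low": None})

print(weather_api("10001"))  # {'high': 78, 'low': 62}
```

The caller never sees where the numbers come from; only the shape of the call and the response is fixed.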

An extraordinarily brief history of APIs

APIs emerged in the early days of computing, well before the personal computer. At the time, an API was typically used as a library for operating systems. The API was almost always local to the systems on which it operated, although it sometimes passed messages between mainframes. After nearly 30 years, APIs broke out of their local environments. By the early 2000s, they were becoming an important technology for the remote integration of data.

APIs and cloud-native applications

Because APIs simplify how developers integrate new application components into an existing architecture, they help business and IT teams collaborate. Cloud-native application development, which relies on connecting a microservices application architecture through APIs, is a proven way to increase development speed.

APIs can also simplify how infrastructure connects through cloud-native app development, while also allowing you to share data with customers and other external users. Public APIs represent unique business value because they can simplify and expand how you connect with your partners, as well as potentially monetize your data (the Google Maps API is a popular example).

For example, imagine a book-distributing company. The book distributor could give its customers a cloud app that lets bookstore clerks check book availability with the distributor. This app could be expensive to develop, limited by platform, and require long development times and ongoing maintenance.

Alternatively, the book distributor could provide an API to check stock availability. There are several benefits to this approach:

  • Letting customers access data via an API helps them aggregate information about their inventory in a single place.
  • The book distributor can make changes to its internal systems without impacting customers, so long as the behavior of the API doesn’t change.
  • With a publicly available API, developers working for the book distributor, booksellers, or third parties could develop an app to help customers find the books they're looking for. This could result in higher sales or other business opportunities.
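
A minimal sketch of such a stock-availability endpoint; the ISBNs and counts are invented, with an in-memory dictionary standing in for the distributor's internal systems.

```python
# Hypothetical stock-availability API for the book distributor.
# The ISBNs and counts are invented; a real service would query the
# distributor's internal systems behind this same stable interface.
INVENTORY = {"978-0-06-293003-3": 12, "978-0-14-303943-3": 0}

def check_stock(isbn: str) -> dict:
    count = INVENTORY.get(isbn)
    if count is None:
        return {"isbn": isbn, "known": False}
    # Internal storage can change freely; only this response shape is
    # the contract that customers depend on.
    return {"isbn": isbn, "known": True, "available": count > 0}

print(check_stock("978-0-14-303943-3"))
```

As long as the response shape stays the same, the distributor can swap out everything behind `check_stock` without breaking its customers.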

In short, APIs let you open up access to your resources while maintaining security and control. How you open access and to whom is up to you. API security is all about good API management, which includes the use of an API gateway. Connecting to APIs, and creating applications that consume the data or functionality exposed by APIs, can be done with a distributed integration platform that connects everything—including legacy systems, and the Internet of Things (IoT).

API release policies

Private

The API is only for use internally. This gives companies the most control over their API.

Partner

The API is shared with specific business partners. This can provide additional revenue streams without compromising quality.

Public

The API is available to everyone. This allows third parties to develop apps that interact with your API and can be a source for innovation.

Innovating with APIs

Exposing your APIs to partners or the public can:

  • Create new revenue channels or extend existing ones.
  • Expand the reach of your brand.
  • Facilitate open innovation or improved efficiency through external development and collaboration.

Sounds great, right? But how can APIs do all that? Let’s return to the example of the book distributing company.

Suppose one of the company's partners develops an app that helps people find books on bookstore shelves. This improved experience brings more shoppers to the bookstore—the distributor's customer—and extends an existing revenue channel.

Maybe a third party uses a public API to develop an app that lets people buy books directly from the distributor, instead of from a store. This opens a new revenue channel for the book distributor.

Sharing APIs―with select partners or the whole world―can have positive effects. Each partnership extends your brand recognition beyond your company’s marketing efforts. Opening technology to everyone, as with a public API, encourages developers to build an ecosystem of apps around your API. More people using your technology means more people are likely to do business with you.

Making technology public can lead to novel and unexpected outcomes. These outcomes sometimes disrupt entire industries. For our book distributing company, new firms―a book borrowing service, for example―could fundamentally change the way they do business. Partner and public APIs help you use the creative efforts of a community larger than your team of internal developers. New ideas can come from anywhere, and companies need to be aware of changes in their market and ready to act on them. APIs can help.

Remote APIs

Remote APIs are designed to interact through a communications network. By remote, we mean that the resources being manipulated by the API are somewhere outside the computer making the request. Because the most widely used communications network is the internet, most APIs are designed based on web standards. Not all remote APIs are web APIs, but it’s fair to assume that web APIs are remote.

Web APIs typically use HTTP for request messages and provide a definition of the structure of response messages. These response messages usually take the form of an XML or JSON file. Both XML and JSON are preferred formats because they present data in a way that’s easy for other apps to manipulate.

SOA vs. microservices architecture

The 2 architectural approaches that make the most use of remote APIs are service-oriented architecture (SOA) and microservices architecture. SOA, the older of the 2 approaches, began as an improvement to monolithic apps: whereas a single monolithic app does everything, an SOA supplies some functions through separate apps that are loosely coupled through an integration pattern, like an enterprise service bus (ESB).

While SOA is, in most respects, simpler than a monolithic architecture, it carries a risk of cascading changes throughout the environment if component interactions are not clearly understood. This additional complexity reintroduces some of the problems SOA sought to remedy.

Microservices architectures are similar to SOA patterns in their use of specialized, loosely coupled services. But they go even further in breaking down traditional architectures. The services within the microservices architecture use a common messaging framework, like RESTful APIs. They use RESTful APIs to communicate with each other without difficult data conversion transactions or additional integration layers. Using RESTful APIs allows, and even encourages, faster delivery of new features and updates. Each service is discrete. One service can be replaced, enhanced, or dropped without affecting any other service in the architecture. This lightweight architecture helps optimize distributed or cloud resources and supports dynamic scalability for individual services.

APIs vs. webhooks

A webhook is an HTTP-based callback function that allows lightweight, event-driven communication between 2 APIs. Webhooks are used by a wide variety of web apps to receive small amounts of data from other apps, but webhooks can also be used to trigger automation workflows in GitOps environments.

Webhooks are often referred to as reverse APIs or push APIs, because they put the responsibility of communication on the server, rather than the client. Instead of the client sending HTTP requests—asking for data until the server responds—the server sends the client a single HTTP POST request as soon as the data is available. Despite their nicknames, webhooks are not APIs; they work together. An application must have an API to use a webhook.
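
As a sketch of that push model, the handler below reacts to a hypothetical event payload; the event type and fields are invented for illustration, and a real receiver would sit behind an HTTP endpoint.

```python
import json

# The server POSTs this payload to the client as soon as the event occurs;
# the client only has to react, rather than polling for changes.
def handle_webhook(raw_body: str) -> str:
    event = json.loads(raw_body)
    if event.get("type") == "stock.updated":  # hypothetical event name
        return "restock check for " + event["isbn"]
    return "ignored"

payload = '{"type": "stock.updated", "isbn": "978-0-06-293003-3"}'
print(handle_webhook(payload))  # restock check for 978-0-06-293003-3
```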

API security

API security best practices include the use of tokens, encryption and signatures, quotas and throttling, and an API gateway. Most importantly, though, API security relies on good API management.

SOAP vs. REST

Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) are 2 efforts that have helped make APIs simpler in their design and more useful in their implementation. As web APIs became more popular, SOAP was developed to help standardize message formats and requests—it is a protocol specification that makes it easier for apps in different environments or written in different languages to communicate.

Both define how to build APIs, but in different ways: REST is a set of architectural principles, while SOAP is an official protocol maintained by the World Wide Web Consortium (W3C).

Typically, an API will adhere to either REST or SOAP, depending on the use case and preferences of the developer. Many legacy systems may still adhere to SOAP, while REST came later and is often viewed as a faster alternative in web-based scenarios. REST is a set of guidelines that offers flexible implementation, whereas SOAP is a protocol with specific requirements like XML messaging.

In recent years, the OpenAPI specification has emerged as a common standard for defining REST APIs. OpenAPI establishes a language-agnostic way for developers to build REST API interfaces so that users can understand them with minimal guesswork.

REST: representational state transfer

REST APIs are lightweight, making them ideal for newer contexts like the Internet of Things (IoT), mobile application development, and serverless computing. SOAP web services offer built-in security and transaction compliance that align with many enterprise needs, but that also makes them heavier. Additionally, many public APIs, like the Google Maps API, follow the REST guidelines.

Any web API that adheres to the REST architectural constraints is called a RESTful API. As defined in Roy Fielding’s dissertation “Architectural Styles and the Design of Network-based Software Architectures,” APIs are RESTful as long as they comply with the 6 guiding constraints of a RESTful system:

  • Client-server architecture: REST architecture is composed of clients, servers, and resources, and it handles requests through HTTP.
  • Statelessness: No client content is stored on the server between requests. Information about the session state is, instead, held with the client.
  • Cacheability: Caching can eliminate the need for some client-server interactions.
  • Layered system: Client-server interactions can be mediated by additional layers. These layers could offer additional features like load balancing, shared caches, or security.
  • Code on demand (optional): Servers can extend the functionality of a client by transferring executable code.
  • Uniform interface: This constraint is core to the design of RESTful APIs and includes 4 facets:
    • Resource identification in requests: Resources are identified in requests and are separate from the representations returned to the client.
    • Resource manipulation through representations: Clients receive files that represent resources. These representations must have enough information to allow modification or deletion.
    • Self-descriptive messages: Each message returned to a client contains enough information to describe how the client should process the information.
    • Hypermedia as the engine of application state: After accessing a resource, the REST client should be able to discover through hyperlinks all other actions that are currently available.

These constraints may seem like a lot, but they're much simpler than a prescribed protocol, which is why RESTful APIs have become more prevalent than SOAP. As a set of architectural principles attuned to the needs of lightweight web services and mobile applications, RESTful APIs leave the implementation of these recommendations to developers. In comparison, SOAP maintains specific requirements, like XML messaging and built-in security and transaction compliance, that make it slower and heavier.

How REST APIs work

When a request for data is sent to a REST API, it’s usually done through hypertext transfer protocol (commonly referred to as HTTP). Once a request is received, APIs designed for REST (called RESTful APIs or RESTful web services) can return messages in a variety of formats: HTML, XML, plain text, and JSON. JSON (JavaScript object notation) is favored as a message format because it can be read by any programming language (despite the name), is human- and machine-readable, and is lightweight. In this way, RESTful APIs are more flexible and can be easier to set up.
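
For instance, a JSON response body maps directly onto native data structures; the endpoint and fields below are invented for illustration.

```python
import json

# A hypothetical JSON body that a RESTful service might return
# for a request like GET /books/42.
body = '{"id": 42, "title": "The Wonderful Wizard of Oz", "in_stock": true}'

book = json.loads(body)  # JSON types map directly onto native types
print(book["title"])     # The Wonderful Wizard of Oz
print(book["in_stock"])  # True
```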

Headers and parameters are also important in the HTTP request to a RESTful API, as they carry metadata about the request: identifiers, authorization, the uniform resource identifier (URI), caching directives, cookies, and more. There are request headers and response headers, each carrying their own HTTP connection information, and responses additionally carry status codes.
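
The sketch below only constructs a request to show where that metadata lives; the endpoint and token are placeholders, and nothing is sent over the network.

```python
from urllib.request import Request

# Hypothetical endpoint; the query string carries a parameter, and the
# headers carry metadata such as the expected format and authorization.
req = Request(
    "https://api.example.com/v1/books?author=Baum",
    headers={"Accept": "application/json", "Authorization": "Bearer TOKEN"},
)

print(req.get_method())          # GET (no body, so the method defaults to GET)
print(req.get_header("Accept"))  # application/json
```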

SOAP: simple object access protocol

APIs designed with SOAP use XML for their message format and receive requests through HTTP or SMTP. SOAP makes it easier for apps running in different environments or written in different languages to share information.

SOAP is a standard protocol that was first designed so that applications built with different languages and on different platforms could communicate. Because it is a protocol, it imposes built-in rules that increase its complexity and overhead, which can lead to longer page load times. However, these standards also offer built-in compliances that can make it preferable for enterprise scenarios. The built-in compliance standards include security, atomicity, consistency, isolation, and durability (ACID), which is a set of properties for ensuring reliable database transactions.

Common web service specifications include:

  • Web services security (WS-security): Standardizes how messages are secured and transferred through unique identifiers called tokens.
  • WS-ReliableMessaging: Standardizes error handling between messages transferred across unreliable IT infrastructure.
  • Web services addressing (WS-addressing): Packages routing information as metadata within SOAP headers, instead of maintaining such information deeper within the network.
  • Web services description language (WSDL): Describes what a web service does, and where that service begins and ends.

How SOAP APIs work

When a request for data is sent to a SOAP API, it can be handled through any of the application layer protocols: HTTP (for web browsers), SMTP (for email), TCP, and others. However, once a request is received, the returned SOAP messages must be XML documents—a markup language that is both human- and machine-readable. A completed request to a SOAP API is not cacheable by a browser, so it cannot be accessed later without resending the request to the API.
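
A hand-written envelope makes the XML message format concrete; the namespace and element names below are invented for illustration and parsed with the standard library.

```python
import xml.etree.ElementTree as ET

# A made-up SOAP response envelope for a stock-availability call.
envelope = """\
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:m="http://example.com/stock">
  <soap:Body>
    <m:GetStockResponse>
      <m:Count>12</m:Count>
    </m:GetStockResponse>
  </soap:Body>
</soap:Envelope>"""

ns = {
    "soap": "http://www.w3.org/2003/05/soap-envelope",
    "m": "http://example.com/stock",
}
root = ET.fromstring(envelope)
count = root.find("./soap:Body/m:GetStockResponse/m:Count", ns).text
print(count)  # 12
```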

GraphQL

GraphQL is a query language and server-side runtime for APIs that prioritizes giving clients exactly the data they request and no more. GraphQL is designed to make APIs fast, flexible, and developer-friendly. As an alternative to REST, GraphQL lets developers construct requests that pull data from multiple data sources in a single API call.

GraphQL can even be deployed within an integrated development environment (IDE) known as GraphiQL.

Additionally, GraphQL gives API maintainers the flexibility to add or deprecate fields without impacting existing queries. Developers can build APIs with whatever methods they prefer, and the GraphQL specification will ensure they function in predictable ways to clients.

Common GraphQL terms

API developers use GraphQL to create a schema to describe all the possible data that clients can query through that service. A GraphQL schema is made up of object types, which define which kind of object you can request and what fields it has. As queries come in, GraphQL validates the queries against the schema. GraphQL then executes the validated queries. The API developer attaches each field in a schema to a function called a resolver. During execution, the resolver is called to produce the value.
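
A library-free sketch of that validate-then-resolve flow; the field names and data are invented, and a real server would use a GraphQL library rather than plain dictionaries.

```python
# Each field in the "schema" maps to a resolver function that produces
# its value at execution time.
FIELD_RESOLVERS = {
    "name": lambda human: human["name"],
    "location": lambda human: human["location"],
}

def execute(requested_fields, human):
    # Validation: reject fields the schema does not define.
    unknown = [f for f in requested_fields if f not in FIELD_RESOLVERS]
    if unknown:
        raise ValueError("unknown fields: " + ", ".join(unknown))
    # Execution: call each requested field's resolver to produce its value.
    return {f: FIELD_RESOLVERS[f](human) for f in requested_fields}

print(execute(["name", "location"], {"name": "Dorothy", "location": "Kansas"}))
```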

Apart from defining and validating syntax for API queries (outlined in the graphql-spec repository), GraphQL leaves most other decisions to the API designer. GraphQL does not provide any direction for how to store data or what programming language to use—developers can use PHP (graphql-php), Scala (Sangria), Python (Graphene Python), Ruby (graphql-ruby), JavaScript (graphql.js), and more. GraphQL offers no requirements for the network, authorization, or pagination.

From the point of view of the client, the most common GraphQL operations are likely to be queries and mutations. If we were to think about them in terms of the create, read, update and delete (CRUD) model, a query would be equivalent to read. All the others (create, update, and delete) are handled by mutations.
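
Building on that CRUD mapping, a mutation might look like the following; the `updateHuman` field and its arguments are hypothetical, assuming a schema like the one implied by the query examples later in this section.

```graphql
mutation {
  updateHuman(id: "1000", location: "Emerald City") {
    name
    location
  }
}
```

The selection set after the mutation lets the client read back the fields it cares about from the updated object in the same round trip.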

GraphQL advantages & disadvantages

Thinking about trying GraphQL in a business or enterprise environment? It comes with both pros and cons.

Advantages include:

  • Setting a single source of truth in a GraphQL application. It offers an organization a way to federate its entire API.
  • Handling calls in a single round trip. Clients get what they request with no overfetching.
  • Strongly defined data types reduce miscommunication between the client and the server.
  • Introspective output that allows a client to request a list of data types available—ideal for auto-generating documentation.
  • Letting APIs evolve without breaking existing queries.
  • Many open source GraphQL extensions are available to offer features not available with REST APIs.
  • No specific application architecture, which means it can be introduced on top of an existing REST API and can work with existing API management tools.

Disadvantages include:

  • A learning curve for developers familiar with REST APIs.
  • A shift toward server-side data queries, which adds complexity for server developers.
  • Different API management strategies than for REST APIs, particularly when considering rate limits and pricing.
  • Complex caching.
  • Writing maintainable GraphQL schemas, which adds work for API maintainers.

An example GraphQL query

The best way to appreciate GraphQL is to look at some sample queries and responses. Let’s look at 3 examples adapted from the GraphQL project website, graphql.org.

The first example shows how a client can construct a GraphQL query, asking an API to return specific fields in the shape the client specifies.

{
  me {
    name
  }
}

A GraphQL API would return a result like this in JSON format:

{
  "me": {
    "name": "Dorothy"
  }
}

A client can also pass arguments as part of a GraphQL query, as seen in this example:

{
  human(id: "1000") {
    name
    location
  }
}

The result:

{
  "data": {
    "human": {
      "name": "Dorothy",
      "location": "Kansas"
    }
  }
}

From here, things get more interesting. GraphQL gives users the ability to define reusable fragments and assign variables.

Suppose you need to request a list of IDs, then request a series of records for each ID. With GraphQL, you could construct a query that pulls everything you want with a single API call. 

So this query:

query HeroComparison($first: Int = 3) {
  leftComparison: hero(location: KANSAS) {
    ...comparisonFields
  }
  rightComparison: hero(location: OZ) {
    ...comparisonFields
  }
}
fragment comparisonFields on Character {
  name
  friendsConnection(first: $first) {
    totalCount
    edges {
      node {
        name
      }
    }
  }
}

Might produce this result:

{
  "data": {
    "leftComparison": {
      "name": "Dorothy",
      "friendsConnection": {
        "totalCount": 4,
        "edges": [
          {
            "node": {
              "name": "Aunt Em"
            }
          },
          {
            "node": {
              "name": "Uncle Henry"
            }
          },
          {
            "node": {
              "name": "Toto"
            }
          }
        ]
      }
    },
    "rightComparison": {
      "name": "Wizard",
      "friendsConnection": {
        "totalCount": 3,
        "edges": [
          {
            "node": {
              "name": "Scarecrow"
            }
          },
          {
            "node": {
              "name": "Tin Man"
            }
          },
          {
            "node": {
              "name": "Lion"
            }
          }
        ]
      }
    }
  }
}

If you are a GitHub user, a quick way to get hands-on experience with GraphQL is with GitHub's GraphQL Explorer. Other open source tools in the GraphQL ecosystem include:

  • Apollo: a GraphQL platform that includes a frontend client library (Apollo Client) and backend server framework (Apollo Server).
  • Offix: an offline client that allows GraphQL mutations and queries to execute even when an application is unreachable.
  • Graphback: a command line-client for generating GraphQL-enabled Node.js servers.
  • OpenAPI-to-GraphQL: a command-line interface and library for translating APIs described by OpenAPI Specifications or Swagger into GraphQL.

API gateways

An API gateway is an API management tool that sits between a client and a collection of backend services. It acts as a reverse proxy to accept all API calls, aggregate the various services required to fulfill them, and return the appropriate result. The API gateway acts as a component of application delivery (the combination of services that serve an application to users)—intercepting API calls from a user and routing them to the appropriate backend service.

API gateway use cases

Exactly what the API gateway does varies from one implementation to another, but common functions include authentication, routing, rate limiting, billing, monitoring, analytics, policy enforcement, alerts, and security.

Most enterprise APIs are deployed via API gateways. It’s usual for API gateways to handle common tasks that are used across a system of API services, such as user authentication, rate limiting, and statistics.

At its most basic, an API service accepts a remote request and returns a response. But real life is never that simple. Consider the concerns that arise when you host large-scale APIs:

  • You want to protect your APIs from overuse and abuse, so you use an authentication service and rate limiting.
  • You want to understand how people use your APIs, so you’ve added analytics and monitoring tools.
  • If you have monetized APIs, you’ll want to connect to a billing system.
  • You may have adopted a microservices architecture, in which case a single request could require calls to dozens of distinct applications.
  • Over time you’ll add some new API services and retire others, but your clients will still want to find all your services in the same place.

An API gateway is a way to decouple client interfaces from backend implementations. When a client makes a request, the API gateway breaks it into multiple requests, routes them to the right places, produces a response, and keeps track of everything.
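
A toy route table shows the decoupling; the paths and backend names are invented, and a production gateway would proxy real HTTP traffic rather than call local functions.

```python
# The gateway owns the route table, so backends can be added, moved, or
# retired without clients ever changing the address they call.
BACKENDS = {
    "/inventory": lambda request: {"service": "inventory-v2", "ok": True},
    "/billing": lambda request: {"service": "billing", "ok": True},
}

def gateway(path: str, request: dict) -> dict:
    # Route by path prefix; unmatched paths never reach a backend.
    for prefix, handler in BACKENDS.items():
        if path.startswith(prefix):
            return handler(request)
    return {"error": "no route", "ok": False}

print(gateway("/inventory/978-0-06-293003-3", {}))
```

Swapping "inventory-v2" for a newer backend is a one-line change in the gateway's table, invisible to every client.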

API gateway benefits

  • Low latency: By distributing incoming requests and offloading common tasks such as SSL termination and caching, API gateways optimize traffic routing and load balancing across backend services. This minimizes server load and bandwidth usage, reduces the need for additional server capacity, and improves the user experience.
  • Traffic management: API gateways throttle and manage traffic through various mechanisms designed to control the rate and volume of incoming requests and ensure optimal performance and resource utilization.
    • Rate limiting policies specify the maximum number of requests allowed within a certain time period (e.g., requests per second, minute, hour) for each client or API key, protecting backend services from overload.
    • Request throttling policies define rules and limits for regulating request traffic, such as maximum request rates, burst allowances, and quotas.
    • Concurrency control policies specify the maximum number of concurrent connections or requests that can be handled simultaneously by the backend servers.
    • Circuit breaking policies monitor the health and responsiveness of backend servers and temporarily block or redirect traffic away from failing or slow services to prevent cascading failures and maintain overall system stability.
    • Dynamic load balancing from API gateways continuously monitors server health and adjusts traffic routing in real-time to handle spikes in demand, minimize response times, and maximize throughput.
  • Dynamic scale: API gateways can dynamically scale infrastructure resources in response to changing traffic patterns and workload demands. In this way, API gateways help businesses optimize resource utilization and minimize infrastructure costs, ensuring they only pay for the resources they actually use.
  • Cost effectiveness: API gateways play a role in managing the cost effectiveness of app delivery and API integration by providing a centralized platform for handling API traffic, enforcing security policies, implementing traffic management rules, and facilitating integration with backend services. API gateways also allow for tiered consumption of services to maintain cost effectiveness. Different types of APIs can impact cost effectiveness of an application in several ways.
    • Flexibility: HTTP APIs, which are more general and can use any HTTP method, offer simplicity and flexibility in development, potentially reducing development costs. REST APIs, which adhere to specific architectural principles and conventions, may require additional effort and expertise to design and implement properly, potentially increasing development costs.
    • Infrastructure: Because of their flexibility, HTTP APIs may have lower infrastructure costs. REST APIs may require additional infrastructure components or services to support these features, potentially increasing infrastructure costs.
    • Scalability: HTTP APIs, which can be scaled horizontally by adding more servers or instances, may offer more cost-effective scalability options, particularly in cloud environments with auto-scaling capabilities. REST APIs may have more complex scaling requirements due to statelessness, caching, and distributed architecture considerations, and may require additional infrastructure resources or services to achieve horizontal scalability, potentially increasing costs.

API gateways and Kubernetes

On a Kubernetes-powered platform, like Red Hat OpenShift, an API gateway can be a key component for managing and routing traffic to services on the cluster. It does so by accomplishing these tasks:

  • acting as an Ingress controller, intercepting incoming HTTP traffic to the cluster and routing it to the appropriate services based on defined rules and configurations.
  • using Kubernetes’ DNS-based service discovery to discover and route traffic to backend services without manual configuration. This enables seamless integration with Kubernetes-native services and facilitates dynamic scaling, service discovery, and failover handling within the cluster.
  • implementing advanced traffic management policies to control the flow of traffic to services deployed on Kubernetes.
  • enforcing security policies such as authentication, access controls, authorization, and encryption to protect services deployed on Kubernetes from unauthorized access and cyber threats.
  • providing observability and monitoring by creating visibility into traffic patterns, performance metrics, and error rates for services deployed on Kubernetes, such as request logging, metrics collection, and distributed tracing.
  • integrating with service meshes, like Istio and Linkerd, to extend their capabilities and provide additional features such as external ingress, edge security, and global traffic management, ensuring seamless interoperability between Kubernetes services and external clients.

API gateways, DevOps, and serverless environments

In organizations that follow a DevOps approach, developers use microservices to build and deploy apps in a fast-paced, iterative way. APIs are one of the most common ways that microservices communicate. Additionally, modern cloud development, including the serverless model, depends on APIs for provisioning infrastructure. You can deploy serverless functions and manage them using an API gateway. 

In general, as integration and interconnectivity become more important, so do APIs. And as API complexity increases and usage grows, so does the value of an API gateway.

A service mesh is a dedicated infrastructure layer within a software application that handles communication between services. It handles traffic routing, security, observability, and resiliency functions, while abstracting these complexities away from individual services.

Modern applications are often broken down as a network of services performing a specific function. To execute its function, one service might need to request data from several other services. But what if some services get overloaded with requests, like an online retailer’s inventory database? This is where a service mesh comes in—it manages communication among services, optimizing how all the moving parts work together.

Think of your last visit to an online store. You might have used the site’s search bar to browse products. That search represents a service. Maybe you also saw recommendations for related products or added an item to your online shopping cart. Those are both services, too. The service that communicates with the inventory database needs to communicate with the product webpage, which needs to communicate with the user’s online shopping cart. The retailer might also have a service that gives users in-app product recommendations. This service will communicate with a database of product tags to make recommendations, and it also needs to communicate with the same inventory database that the product page needed.

Service mesh and microservices

A service mesh can be considered a microservices architecture pattern. Microservices are a style of application architecture where a collection of independent services communicate through lightweight APIs. A microservices architecture is a cloud-native approach to building software in a way that allows each core function within an application to exist independently. Unlike app development in other architectures, individual microservices can be built by small teams with the flexibility to choose their own tools and coding languages. Microservices are built independently, communicate with each other, and can individually fail without escalating into an application-wide outage.

Service-to-service communication is what makes microservices possible. As the microservices architecture grows, it becomes more complex to manage. If an app contains dozens or hundreds of services interacting with each other, challenges arise around network failures, monitoring and tracing, balancing traffic loads, and securing communication among different microservices. Addressing these issues entirely through custom code would be inefficient. Service mesh provides a consistent solution to handle these challenges without having to change the code of individual services.

A service mesh manages the communication between services using a data plane and a control plane. Service mesh doesn’t introduce new functionality to an application’s runtime environment—apps in any architecture have always needed rules to specify how requests get from point A to point B. What is different about a service mesh is that it takes the logic governing service-to-service communication out of individual services and abstracts it to a layer of infrastructure.

To do this, a service mesh is built into an app as an array of network proxies. Proxies are a familiar concept—if you’re accessing this webpage from a work computer, there’s a good chance you just used one. Here’s how it works:

  1. As your request for this page went out, it was received by your company’s web proxy.
  2. After passing the proxy’s security measures, it was sent to the server that hosts this page.
  3. Next, this page was returned to the proxy and checked against its security measures again.
  4. And then it was sent from the proxy to you.

In a service mesh, each service instance is paired with a sidecar proxy that runs alongside it, intercepts all inbound and outbound network traffic, and routes requests to and from other proxies. The proxy handles tasks like traffic routing, load balancing, enforcing security policies, and collecting telemetry data. Instead of communicating directly with one another, services send requests through their sidecars, which handle inter-service communication. All of this comprises the data plane.

The control plane manages the configuration and policy distribution across the data plane. The control plane also distributes traffic routing rules, manages security certificates between services, configures components to enforce policies, and collects telemetry.
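
The division of labor between the two planes can be sketched in a few lines of code. This is a toy illustration, not how any real mesh is implemented: the control plane distributes policy (here, a retry budget), and the sidecar applies it and records telemetry, so the service itself contains no communication logic.

```python
# Hypothetical sketch: a sidecar proxy applies policy pushed from the control
# plane and collects telemetry on behalf of the service it fronts.

CONTROL_PLANE_POLICY = {"max_retries": 3}  # distributed by the control plane

class SidecarProxy:
    def __init__(self, policy):
        self.policy = policy
        self.telemetry = {"requests": 0, "retries": 0}

    def call(self, service_fn, *args):
        """Route a request through the proxy, retrying per policy."""
        self.telemetry["requests"] += 1
        for _attempt in range(self.policy["max_retries"] + 1):
            try:
                return service_fn(*args)
            except ConnectionError:
                self.telemetry["retries"] += 1
        raise ConnectionError("service unavailable after retries")

# A flaky downstream service that fails twice, then succeeds.
attempts = {"count": 0}
def inventory_lookup(item_id):
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("timeout")
    return {"item": item_id, "stock": 7}

proxy = SidecarProxy(CONTROL_PLANE_POLICY)
print(proxy.call(inventory_lookup, "sku-42"))  # {'item': 'sku-42', 'stock': 7}
print(proxy.telemetry)                         # {'requests': 1, 'retries': 2}
```

Note that `inventory_lookup` knows nothing about retries or metrics; that logic lives entirely in the proxy, which is the abstraction the data plane provides.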

Without a service mesh, each microservice needs to be coded with logic to govern service-to-service communication. This makes communication failures harder to diagnose because the logic that governs interservice communication is hidden within each service.

Istio and Envoy

Istio is an open source service mesh platform that controls how microservices share data with one another. It controls the flow of traffic, enforces policies, and monitors communications in a microservices environment. It includes APIs that let Istio integrate into any logging platform, telemetry, or policy system. Istio can run in a variety of on-premise, cloud, containerized, and virtualized environments.

Istio’s architecture is divided into the data plane and the control plane. Istio uses Envoy proxies, which are high-performance proxies that are deployed as sidecars and mediate traffic for all services within the service mesh. In the data plane, developers can add Istio support to a service by deploying a sidecar proxy within the environment.

Istio’s service mesh includes a new ambient mode that removes the need for sidecar proxies, replacing them with node-level proxies and intermediate gateways called waypoints. Waypoint proxies run outside of application pods and are managed independently of applications.

Service mesh benefits

Every new service added to an app—or new instance of an existing service running in a container—complicates the communication environment and introduces new points of possible failure. In a complex microservices architecture, it can be nearly impossible to locate problems without a service mesh.

There are several advantages to using a service mesh, including:

  • Improved security: A service mesh uses mutual Transport Layer Security (mTLS) to ensure communication between services is encrypted and secure, and sensitive user data is protected. It adds an extra layer of security without requiring additional encryption to be added to each service manually. A service mesh can improve role-based access control (RBAC) and policies for securing APIs, and it can automate certificate management and key rotation.
  • Policy enforcement: Service meshes include centralized configuration for service policies like quotas, rate limiting, and authentication and authorization, and they provide control over service interactions through access policies. Policies are enforced at the proxy level, which helps create consistency across services.
  • Traffic management: Service mesh can help your apps manage traffic to individual services based on load conditions, versions, and user-specific rules. For example, if you’re rolling out a new version of your inventory service, you can use a canary deployment to send only 5% of the traffic to the new service. (A canary is a smaller test deployment.) If that works, you can increase the traffic.
  • Health checks and observability: It can be difficult to view how microservices are interacting in real time, but with a service mesh you can implement built-in observability tools like distributed tracing and metrics collection. Sidecars in a service mesh collect metrics (request counts, latency, error rates) and send them to the control plane or monitoring tools.
  • Fault tolerance and increased resilience: When microservices encounter failures, a service mesh can help by automating retries and fallbacks. If a service fails or becomes unresponsive, the service mesh will retry based on predefined rules and can reroute traffic to alternative services. This means the app can handle failure gracefully when a service becomes unavailable, ensuring users still have a good experience. The service mesh also collects data on how long it took before a retry succeeded. This data can inform future rules on optimal wait time, ensuring that the system does not become overburdened by unnecessary retries.
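
The canary deployment described under traffic management comes down to a weighted random choice per request. This sketch simulates that split; the version names and 5% weight are illustrative:

```python
import random

# Sketch of the weighted traffic split behind a canary deployment:
# roughly 5% of requests go to the new version, the rest to the old one.

def pick_version(canary_weight=0.05, rng=random.random):
    return "inventory-v2" if rng() < canary_weight else "inventory-v1"

random.seed(0)  # make the simulation deterministic
hits = sum(pick_version() == "inventory-v2" for _ in range(10_000))
print(f"canary share: {hits / 10_000:.1%}")  # close to 5%
```

In a real mesh, the proxies apply this weighting consistently across the fleet, and the weight can be raised gradually as the new version proves itself.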

With a service mesh, development and operations teams are better equipped to handle migration from monolithic applications to cloud-native apps―collections of small, independent, and loosely coupled microservice applications.

Service mesh challenges

Organizations can experience challenges when implementing a service mesh, including:

  • Complexity and integration with existing systems: A service mesh can be difficult to set up, manage, and integrate with existing systems. Organizations can encounter challenges if they are working in a large, distributed environment across multicloud and on-premise systems, or have not previously used a service mesh in their environment.
  • Resource requirements and operational overhead: Service meshes can increase the operational overhead of managing applications because each service instance now has a sidecar proxy, which increases CPU and memory usage. Managing and troubleshooting, particularly in large-scale deployments, can be complex, and maintaining performance and scale can be more difficult as a result.
  • Skills gaps: Teams need training to understand service mesh features, configuration, and best practices. Debugging failures can be severe, especially when issues arise due to complex routing rules or mTLS misconfigurations. Many organizations find that their existing teams lack expertise in service mesh technology, which can present challenges with getting started and using service meshes effectively.

Service mesh vs. API gateway

A service mesh secures internal service-to-service communication, while an API gateway manages external traffic entering the system. A service mesh used together with an API gateway can ensure policies are applied uniformly across internal services.

API monetization is the process by which businesses create revenue from their APIs. Having well-developed APIs establishes and maintains relationships in a digital economy, allowing others to access and integrate your data and resources into their public or private sites and applications. But API monetization isn’t just about generating revenue with your API, it’s also about keeping APIs in operation.

Consider a company’s research and development lab—one that accepts ideas and integrations from partners then incubates these ideas, applications, and business relationships. Companies like this are using APIs to allow the introduction of outside ideas and talent in hopes of inciting innovation. Other examples include:

  • 3D printing: With the help of web APIs, 3D printing is moving beyond hobby and art, and it has serious potential for redefining the global manufacturing landscape. Several platforms focus on providing APIs for 3D printing.
  • Automobiles: Major automakers, like Ford and GM, are turning vehicles into API platforms, creating opportunities for businesses and developers to provide new products and services in-vehicle.
  • Home: Devices play a central role in our daily lives. We carry smart devices from home to work and everywhere else, and APIs have made their way beyond our computers to integrate directly with our homes. The next generation of home automation technology is being developed, ranging from thermostats for heating and air conditioning to lighting and home security. While much home automation technology is still without APIs, many providers are introducing developer ecosystems and are using APIs to stimulate innovation around home technology integration.
  • Buildings: Many buildings already have automated heating, air, electrical, water, and other systems. Building equipment manufacturers are quickly seeing the importance of allowing API access to their hardware and software. Imagine if buildings were able to teach us to reduce energy consumption, diagnose problems in real time, and self-correct when necessary or call a service provider when repairs or tuning are required.
  • Quantified self: Devices that allow the wearer to understand more about themselves are ubiquitous. Sports and fitness-related personal activity measurement devices are common among quantified self devices. They can be used for everything from lifestyle tracking to healthcare.

Many companies start by focusing on launching and evolving their API strategy and gaining essential experience before fully executing on their API monetization strategy, relying initially on indirect API value. While it is better to have a monetization strategy in place early, others are finding success by prioritizing the API first and monetization second.

In some cases APIs can lead to entirely new business opportunities outside of the existing business model of an organization. Even in these cases, APIs generally use existing assets or expertise to create opportunities in new ways.

There are 3 reasons why determining the right business model is important for designing effective API programs:

  1. Determining the right business model brings the value of the API to the organization into focus, which drives the decision regarding long-term commitments to the API program. Without that commitment there are rarely resources in place to complete the tasks required for establishing and running an effective API program.
  2. Determining the right business model helps to define the functionality of the product, which is needed to satisfy third parties and generate business.
  3. Determining the right business model ensures attention is paid to roles and responsibilities within an organization, and to who retains which parts of the value generated by the API. This also implies defining what users of the API gain from use of the API and how that balances against what the API provider gains.

Some API resources lend themselves better to a pay-as-you-go model, while some markets demand that data be freely accessible without the need to register or be charged for access. There is no one-size-fits-all approach to API monetization, and a wide mix of monetization strategies can be used.

Free

A popular way to provide access to an API is by offering a free tier so that anyone can sign up, start using an API, and understand the value it delivers. This allows consumers to test the API and see if it will meet their needs before spending money. While free is a good option for many API monetization strategies, it works best in conjunction with other strategies. If you implement a free tier of the API alone, without a strategy to sell services to those with greater demands, your organization can face problems.

Consumer pays

After providing free access, the next approach to API monetization is to establish a price that consumers will pay for the services or resources the API provides. There are 3 common approaches to charging API consumers:

  • Tiered: Some API providers set up multiple tiers of paid access, such as bronze, gold, or platinum. Each tier has its own set of services and allowances for access to API resources, with pricing stepping up for each tier.
  • Pay-as-you-go: Another option is a utility-based model, where API consumers pay for what they use. Depending on the amount of bandwidth, storage, and other hard costs incurred around API consumption, providers charge based upon their cost, plus a logical profit.
  • Unit-based: Finally, other API providers define each API resource in terms of units and assign a unit price. API consumers pay for the number of units they anticipate using, with the option to buy more when necessary.

Some API providers mix and match different combinations of tiered, pay-as-you-go, and unit-based API pricing to recover operational costs as well as generate revenue.
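
The three charging models above can be made concrete with a short sketch. Every price, allowance, and rate here is invented purely for illustration:

```python
# Illustrative sketch of tiered, pay-as-you-go, and unit-based pricing.
# All numbers are made up.

TIERS = {  # tier name -> (monthly call allowance, flat monthly price)
    "bronze": (10_000, 49.0),
    "gold": (100_000, 199.0),
    "platinum": (1_000_000, 499.0),
}

def tiered_charge(tier: str) -> float:
    """Flat monthly price for a tier's call allowance."""
    _allowance, price = TIERS[tier]
    return price

def pay_as_you_go_charge(calls: int, cost_per_call=0.0004, margin=1.25) -> float:
    """Provider's per-call cost, plus a profit margin."""
    return round(calls * cost_per_call * margin, 2)

def unit_based_charge(units: int, unit_price=5.0) -> float:
    """Consumers pre-buy units and top up when needed."""
    return units * unit_price

print(tiered_charge("gold"))          # 199.0
print(pay_as_you_go_charge(250_000))  # 125.0
print(unit_based_charge(30))          # 150.0
```

Mixing models, as many providers do, simply means applying more than one of these functions to the same consumer (for example, a flat tier price plus pay-as-you-go overage beyond the allowance).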

Consumer gets paid

In some cases, an API will drive other revenue streams for companies and can actually share revenue with API consumers. This approach acts as an incentive model for API consumers, encouraging integration and quality implementation of the resources that drive revenue for an API provider. 3 distinct models for sharing API revenue with consumers have emerged:

  • Ad revenue share: Some API providers offer an advertising network as part of their platforms. API consumers embed advertising in their sites and apps, providing revenue for API providers. In turn, the API provider returns a portion of the revenue from advertising.
  • Affiliate: Some approaches to monetization of websites have been applied to API ecosystems. Cost per acquisition (CPA), cost per click (CPC), and one-time or recurring revenue-sharing models are commonly used.
  • Credits to bill: A smaller group of API providers credit the API consumer’s bill based upon advertising revenue share or affiliate revenue, reducing a developer’s overhead for integration and potentially reducing the API provider’s expenditure.

Indirect monetization

Monetization of an API isn’t always about generating revenue directly from API access, advertising, or affiliate models. There are indirect ways that an API can deliver value.

  1. Marketing vehicle: APIs can serve as a marketing vehicle for a company and its online presence. Through sensible branding strategies, developers can become third-party marketing agents, working on behalf of a core company and its brand.
  2. Brand awareness: As a new tool in an overall marketing and branding strategy, an API can provide a type of brand exposure via third-party websites and applications, thereby extending the reach of a brand using third-party API consumers as the engine.
  3. Content acquisition: Not all APIs are about delivering content, data, and other resources to consumers. APIs often allow for writing, updating, accessing, and deleting content. Content acquisition via API can be a great way to build value within a company and its platform.
  4. Software-as-a-Service (SaaS): SaaS has become a common approach to selling software online to consumers and businesses. Oftentimes an API will complement the core software and its offering, providing value to SaaS users. API access is often included as part of a core SaaS platform, but also can be delivered as an option for premium SaaS users.
  5. Traffic generation: APIs can also be used to drive traffic to an existing website or application. Designing an API to use hyperlinks directed at central websites or apps—and encouraging consumers to build their own websites and apps that are integrated with the API—provides a great opportunity for increasing traffic.

API monetization roadmap

API providers and API consumers are constantly building trust and establishing relationships. A key facet of this trust—and the foundation for the relationship—is sharing a common roadmap. API providers need to actively involve API consumers in where the API resources are going so consumers can prepare for changes, adjust to them, and provide feedback. Nothing will upset API consumers faster than keeping them in the dark about what to expect from the APIs and surprising them with changes to or failures in their applications.

API monetization ecosystems

APIs start with deploying an API area where a handful of APIs can be managed, accessed, and put to use by consumers. But the goal is to take an API area and evolve it into an active community of API consumers, in hopes of transforming it into a self-service, self-managing ecosystem of passionate partners and consumers.

Sustainable API ecosystems are symbiotic. They’re not just about API providers generating value, they’re also about API users getting the resources they need to be successful and engaging in new approaches to business development.

An active API will attract new users. The users who get the value they are looking for—and the support they need—will ultimately spread the word. A viable API ecosystem is equal parts technology, business, and politics. A balance needs to be struck that doesn't just deliver value for API providers and consumers, but also for end users of web or mobile apps that are integrated with API resources.

Of the many business cases for API monetization, banking APIs represent an essential market evolution as the connective tissue of digital banking. Cloud technology has particularly improved how banks architect, develop, and operate APIs, so they can break free from the limitations of the past.

Previous generations of API technology were built around traditional architectures that were bound by the limitations of the underlying infrastructure. This approach meant that hosting arrangements were statically defined upfront, which led to the centralization of integration concerns. Eventually, this hampered the bank’s ability to quickly create and adjust APIs as needed.

As new architectures like microservices continue to evolve, so does the technology required to support them, putting even greater demand on API development and management. However, with the right cloud-native processes and tools, APIs and cloud technology can work together to reduce technology complexity and create new value for customers and partners. They can help banks move beyond the limitations of traditional approaches to APIs and gain new levels of nimbleness and efficiency.

More specifically, with cloud-native technology, banks can streamline the process of creating and managing APIs without the burden of managing the underlying infrastructure, while applying non-intrusive policy enforcement to the runtimes that support them.

Service mesh vs. API management for banking

Consistent, effective management of APIs is critical within the banking industry. This e-book explains both API management and service mesh approaches, when to choose one over the other, and how to set up a comprehensive service management architecture using both solutions together to keep banking data safe and services reliable.

Banking API security

The data flowing inside and outside of the bank has long been a target for criminals who wish to use it as part of their nefarious activities. APIs have become a popular attack vector for criminals to gain access to and manipulate bank information.

The growing sophistication of attacks means that banks need much more than API access control as they think about API protection and API security. Traditional security approaches were based on a "castle-and-moat" model, which has proven to be unsuitable against today’s criminals. Cloud platforms can help ease adoption of a zero trust model with a built-in service mesh that provides API protection, assuming any communication from any source is untrusted by default.

By also including a Kubernetes-native approach to API management, the bank can take advantage of the underlying capabilities of the cloud platform to provide the strongest possible API security posture for its data in motion.

Banking APIs and clouds

APIs are fundamentally about exchanging data, and the goal is to make that exchange simple, secure, and reliable. Having a sound API design can ensure that the bank does this as efficiently as possible. Cloud platforms can enforce API best practices and API standards through rules applied within the deployment pipeline and enforced through a standard policy enforcement point. This enforcement helps to keep API definitions simple and consistent.

Cloud technology can also make your architecture more nimble by enforcing good API design principles when it comes to API granularity. Banks can use containers to adopt a microservice-based architecture, breaking APIs into right-sized pieces that can evolve independently and provide the nimbleness expected from a cloud-native architecture, while maintaining the security and reliability required for communication inside and outside of the bank.
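
A pipeline rule that enforces API standards at a policy enforcement point can be as simple as a lint pass over the API’s path definitions. This is a toy sketch with made-up rules, not any real platform’s policy engine:

```python
import re

# Toy sketch of a pipeline policy check: flag API path definitions that
# break simple naming standards. The rules themselves are invented.

RULES = [
    (re.compile(r"^/[a-z0-9/{}\-]+$"), "paths must be lowercase kebab-case"),
    (re.compile(r"^(?!.*(get|create|delete)).*$"), "paths must not contain verbs"),
]

def check_paths(paths):
    """Return (path, message) pairs for every rule a path violates."""
    violations = []
    for path in paths:
        for pattern, message in RULES:
            if not pattern.match(path):
                violations.append((path, message))
    return violations

# A compliant path passes; "/getAccounts" trips both rules.
print(check_paths(["/accounts/{id}/transactions", "/getAccounts"]))
```

Running a check like this on every deployment is what keeps API definitions simple and consistent without relying on reviewers to catch every deviation.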

Banking API development

The ability to rapidly evolve APIs can provide an unprecedented advantage to banks. Many banks have adopted agile practices and principles to speed up development, but traditional distributed technology has limited the benefits. Cloud platforms can empower agile teams to evolve APIs without the need to request infrastructure or other supporting resources. This means that teams can spend more time focusing on creating value with API programming instead of raising tickets for resources. Cloud platforms also aid in API discovery and use among distributed developers who need their applications to integrate with existing services.

With a Kubernetes-native approach, developers can streamline software delivery and get their new features to users faster. Developers have additional services within the cloud platform for API design and testing, along with other technology that supports full stack development and works in conjunction with deployment pipelines. This empowers teams to quickly adjust when they need to, while adhering to software delivery best practices.

Banking API operations

API consumers have service-level expectations of the APIs provided by the bank. Poorly performing APIs and outages not only can negatively impact the bank’s reputation, but also can be costly to return to service. APIs depend on other systems and components in order to function properly and meet service-level obligations.

The service mesh within the cloud platform goes well beyond traditional approaches to monitoring APIs. It can automatically detect slowdowns and shut off communication until the impacted component can recover. Cloud platforms can extend the value of service meshes by automatically identifying when instances are unhealthy and taking corrective action to bring them back to a healthy state. These cloud-based capabilities not only improve availability, but also reduce the cost of running operations.

Cloud platforms also have built-in pipelines to support continuous delivery. This can take the pain out of cumbersome deployment practices when new versions of the API need to be released. It also enables banks to employ canary deployments so that traffic can gradually be migrated over to the new version of the API and ultimately reduce deployment risk.

API design

API design refers to the process of developing APIs that expose data and application functionality for use by developers and users.

An effective API program builds on an organization’s overarching corporate strategy and objectives. You’ll know you have the makings of a great strategy when you can answer the following 3 questions in a clear way:

  1. Why do we want to implement APIs?
  2. What concrete outcomes do we want to achieve with these APIs?
  3. How do we plan to execute the API program to achieve that?

API design implementation

The first question, "Why do we want to implement APIs?", is often misinterpreted. Rather than focus on the value of the API itself, it’s helpful to think of the value of the effect of the API. Remember, it’s the organization’s core business that’s valuable, not necessarily the API. An API is valuable when it becomes a channel that provides new types of access to the existing value an organization delivers.

Another common misconception is believing that for an API to be valuable users must be prepared to pay for it. This is true only if the API itself is the product. In most models, this is not the case. APIs are usually driving some other metric—sales, affiliate referrals, brand awareness, etc. The value of the API to users is the result of an API call (service request and response), rather than the call itself.

The most common business drivers for establishing an API program, according to a survey of 152 organizations conducted by the Cutter Consortium and Wipro, are to develop new partnerships, to increase revenue, to exploit new business models, to improve time to market, and to develop new distribution channels. The top technology drivers are to improve application integration, improve mobile integration, and support the connection to more devices. The benefits to the organization need to be strong enough to make the decision to invest in the APIs an obvious choice for the organization.

API design outcomes

The second question should be “What concrete outcomes do we want to achieve with these APIs?” In other words, “What do the APIs actually do and what impact do they have on the wider business strategy?”

Both the concepts of the internal view and the external view of an organization can help to define the what of the API. The internal view refers to specific, valuable assets an organization possesses. The more valuable and unique the services and resources offered, the more suitable they are for an API program.

An organization that has unique data could take advantage of this resource by allowing access to the data via API. Unique content, data, and services can make access to the API extremely valuable.

When deciding what an API should do for a business, both internal and external views need to be examined. The decision about the what is then usually a combination of the 2 views.

In concrete terms, while the why is unlikely to change often, the what may vary significantly based on external factors—such as markets, technical considerations, or economic conditions. Internal directions about the value of an asset may change, which could also affect what should be achieved with an API.

API design process

The final question, “How do we design our API program to achieve what we want?” is all about implementation and execution.

Teams must ask themselves:

  • What technology is used to build the APIs?
  • How are the APIs designed?
  • How are the APIs maintained?
  • How are the APIs promoted inside the organization or marketed to the outside world?
  • What resources are available?
  • Who should be on the team?
  • How do we track success against the business goals that have been set?

API design teams

An API team is most closely related to a product team—whether your customers are internal or external, you are in charge of building, deploying, operating, and optimizing the infrastructure others depend on.

Just like product teams, API teams can also be quite varied, but typically they should include a product-centric person who acts as the keeper of strategy and goals, design-focused team members who ensure best practice in API design, engineers who put the API technology in place, and operations team members who will run the API.

Over time you may also have additional people involved, including support and community team members, API evangelists, security representatives, and others.

John Musser highlighted 5 keys to a great API in his 2012 talk at the O’Reilly Open Source convention:

  1. Provide a valuable service
  2. Have a plan and a business model
  3. Make it simple, flexible, and easily adopted
  4. Manage and measure it
  5. Provide great developer support

The first key—to provide a valuable service—is especially important when thinking about the why of your API program. The value proposition is the main driver for success of the API. If an API has the wrong value proposition (or none at all) it will be very difficult or impossible to find users.

Almost any company with an existing product, digital or physical, can generate value through an API, if that API links to existing offerings and enhances them. As long as the API is structured in a way that covers meaningful use cases for developers, it will deliver value.

Creating value from APIs

The first step to finding and describing the value of APIs is describing jobs users are trying to get done. For example:

  • Automatically sending urgent communications to team members in an emergency
  • Backing up critical files to ensure they are never lost
  • Collecting sample data to detect certain events

The second step is identifying particular challenges that affect users before, during, or after trying to get a job done:

  • Ensuring reliable delivery with multiple retries, detecting failures, handling many messages being sent rather than just one, and integrating with different message delivery systems depending on the location of the user
  • Ensuring the safe delivery of files, but also minimizing the amount of transfer bandwidth
  • Dealing with massive amounts of data and attempting to correlate it in real time

The third step is to summarize the potential gains a user could achieve:

  • Sending other types of notifications, which create opportunity rather than warn of threat
  • Getting rid of other storage equipment if reliability meets your needs
  • Automatically triggering actions based on the events

When examining these pain points, think broadly and list things like support, documentation, or developer portals—everything that a customer could use. Next, outline how you intend to eliminate or reduce some of the things that may be annoying to API users before, during, or after trying to complete a job—or issues that prevent them from doing so. Then describe how you intend to create gains of any sort for your API users.

Through engaging in this process, our 3 examples above might result in:

  • A multichannel messaging API with a single call to deliver messages and the ability to retry automatically until arrival is guaranteed (e.g., Twilio, PagerDuty).
  • A storage synchronization API with optimized calls to efficiently check if new versions should be synchronized (e.g., Bitcasa, Box).
  • An API aggregating several data sources into a configurable stream, which could be filtered, sampled, and easily manipulated (e.g., GNIP, DataSift).

Finally, a useful clarification exercise is to compose several statements that make the fit between the API and the user profile clear. If you find it hard to identify such fit statements, then the API model needs to be reconsidered. Maybe there are API features which need to be added, revised, refined, or eliminated. It could also be that your API does offer great value, but you are trying to address the wrong type of users.

When you condense and abstract your fit statements into one overarching statement, it becomes your value proposition for your APIs. In the case of the messaging API above this might be something like:

“Our messaging API provides enterprise developers a reliable, guaranteed, no-latency text messaging functionality for highly-critical business applications. The API is also supported by software development kits (SDKs) covering the most popular programming languages for quick integration.”

In some cases you might think “This seems like too much work. We’re just creating an internal API.” However, focusing on value is key, even in internal use cases. A poorly determined value proposition will lead to difficulty pitching the value of the API to other teams. A well-defined value proposition can help ease adoption and make the API program a key contributor to the business.

To help define your own API program’s value, consider these 5 questions:

  1. Who is the user? This question should be answered in terms of their relationship to you (are they existing customers, partners, external developers), their role (are they data scientists, mobile developers, operations people) and their requirements or preferences.
  2. What user pains are we solving and/or what gains are we creating for the user? This question should be answered in relationship to the customer’s business, challenges and gains defined by the value proposition, and whether or not a critical need is being fulfilled (is it a pain point, is it a revenue opportunity), and what metric is being improved for the user (speed, revenue, cost saving, being able to do something new).
  3. Which use cases are supported with your API? Identify, with the help of the value proposition, the solutions to your user’s challenges or opportunities created by the API that are most effective for your organization and the user. Plan your API to address these use cases.
  4. How can the value for the user be expanded over time? Plan your value proposition with future changes in mind. What are important upcoming milestones relating to internal or external changes?
  5. What value is being created for your organization internally? Consider internal benefits and how the API can be of value within the business.

API design implementation

Good API design has some core principles, which may differ in implementation. Here’s an analogy: every car has a steering wheel, brake pedals, and an accelerator. You might find that the hazard lights, the trunk release, or radio are slightly different from model to model, but it’s rare that an experienced driver can’t figure out how to drive a rental car.

This level of “ready-to-drive” design is what great API teams strive for—APIs which require little or no explanation for the experienced practitioner to begin using them.

API design simplicity

Simplicity of API design depends on the context. A particular design may be simple for one use case but very complex for another, so the granularity of API methods must be balanced. It can be useful to think about simplicity on several levels, including:

  • Data format: Support of XML, JSON, proprietary formats, or a combination.
  • Method structure: Methods can be very generic, returning a broad set of data, or very specific to allow for targeted requests. Methods are also usually called in a certain sequence to achieve certain use cases.
  • Data model: The underlying data model can be very similar or very different to what is actually exposed via the API. This has an impact on usability, as well as maintainability.
  • Authentication: Different authentication mechanisms have different strengths and weaknesses. The most suitable one depends on the context.
  • Usage policies: Rights and quotas for developers should be easy to understand and work with.
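
The method-structure tradeoff above can be sketched with two hypothetical methods over the same record, one generic and one targeted (the names and data here are illustrative, not from any real API):

```python
# Hypothetical in-memory "backend" used for illustration only.
USERS = {42: {"name": "Ada", "email": "ada@example.com", "plan": "pro"}}

def get_user(user_id):
    """Generic method: returns the full record (simple surface, but over-fetches)."""
    return USERS[user_id]

def get_user_email(user_id):
    """Specific method: a targeted request that returns only one field."""
    return {"email": USERS[user_id]["email"]}
```

A generic method keeps the API surface small; a specific method keeps payloads small. Which is "simpler" depends on the use case, which is why granularity must be balanced.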

API design flexibility

Making an API simple may conflict with making it flexible. An API created with only simplicity in mind runs the risk of becoming overly tailored, serving only very specific use cases, and may not be flexible enough for other use cases.

To establish flexibility, first determine the potential space of operations based on the underlying systems and data models, then define what subset of these operations is feasible and valuable. In order to find the right balance between simplicity and flexibility:

  • Try to expose atomic operations. By combining atomic operations, the full space can be covered.
  • Identify the most common and valuable use cases. Design a second layer of meta operations that combine several atomic operations to serve these use cases.
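
The two-layer idea above can be sketched as follows, with hypothetical atomic operations combined into one meta operation (all names and the in-memory state are illustrative):

```python
# Atomic operations: each does exactly one thing against in-memory state.
INVENTORY = {"sku-1": 3}
CHARGES = []

def reserve_stock(sku, qty):
    """Atomic: decrement inventory, failing if stock is insufficient."""
    if INVENTORY.get(sku, 0) < qty:
        raise ValueError("insufficient stock")
    INVENTORY[sku] -= qty

def charge_payment(customer, amount):
    """Atomic: record a payment charge."""
    CHARGES.append((customer, amount))

# Meta operation: a second layer combining atomic calls for a common use case.
def place_order(customer, sku, qty, unit_price):
    reserve_stock(sku, qty)
    charge_payment(customer, qty * unit_price)
    return {"customer": customer, "sku": sku, "qty": qty}
```

Clients with unusual needs can still call the atomic operations directly, while the common case stays a single call.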

Arguably, the concept of hypermedia as the engine of application state (HATEOAS) can further improve flexibility because it allows runtime changes in the API and in client operations, and it can make versioning and documentation easier. HATEOAS is, however, only one of many questions that must be considered in API design.
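
To make the idea concrete, here is a small sketch of a HATEOAS-style response (the `order` resource, the `rel` names, and the URLs are all hypothetical):

```python
# The server embeds links describing which operations are currently valid,
# so clients follow relation names at runtime instead of hard-coding URLs.
order = {
    "id": 17,
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/17"},
        "cancel": {"href": "/orders/17/cancel"},  # present only while pending
    },
}

def link_for(resource, rel):
    """Resolve an operation by relation name; None if it is not offered."""
    link = resource["_links"].get(rel)
    return link["href"] if link else None
```

If the server later changes the cancel URL or withdraws the operation, a client using `link_for` keeps working without a code change.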

Critical questions for API design

In order to think through your API design, consider the following 5 questions:

  1. Have we designed the API to support our use cases? The next step after identifying the main use cases is to design the API so that it supports these use cases. Flexibility is important so as not to exclude any use cases that may be less frequent, but should still be supported to allow for innovation.
  2. Are we being RESTful for the sake of it? RESTful APIs are quite fashionable, but you shouldn't follow this trend just for the sake of fashion. There are use cases which are very well suited for it, but there are others that favor other architectural styles, such as GraphQL.
  3. Did we expose our data model without thinking about use cases? An API should be supported by a layer that abstracts from your actual data model. As a general rule, don’t have an API that goes directly to your database—although there may be cases which require that.
  4. Which geographic regions are most important and have we planned our datacenters accordingly? API design must also cover nonfunctional elements, such as latency and availability. Make sure to choose datacenters that are geographically close to where you have most of your users.
  5. Are we synchronizing the API design with our other products? If the API is not the sole product of your business, make sure that the API design is coordinated with the design of the other products. It may be that you decide to completely decouple API design from other products. Even if this is the case, the decision needs to be made clear and communicated both internally and externally.

API design and developers

A key metric to improve API design for easy adoption is the time to first hello world (TTFHW). In other words, how long does it take a developer to reach a minimum viable product with your API? This is a great way to put yourself in the shoes of a developer who wants to test your API to see what it takes to get something working.

When you define the start and end of the TTFHW metric, we recommend covering as many aspects of the developer engagement process as possible. Then optimize it to be as quick and convenient as possible.

Being able to go through the process quickly also builds developer confidence that the API is well organized, and things are likely to work as expected. Delaying the “success moment” too long risks losing developers.

In addition to TTFHW, we recommend another metric: "Time to first profitable app" (TTFPA). This is trickier, because “profitable” is a matter of definition, depending on your API and business strategy. Considering this is helpful because it forces you to think about aspects related to API operations as part of the API program.

The 2 underlying principles of developer experience are:

  • Design a product or service that provides a clear value to developers and addresses a clear challenge or opportunity. This can be monetary value or some other value, such as a way to increase reach, brand awareness, customer base, indirect sales, reputation for the developer, or simply the joy of using great technology that works.
  • The product needs to be easily accessible. This can include having a lightweight registration mechanism (or none at all), access to testing features, great documentation, and a lot of free and tidy source code.

We suggest that most API programs should have a developer program—regardless of whether you expose your APIs publicly, to partners only, or internally only. The provisions may be more or less elaborate depending on the audience.

API design developer portals

The developer portal is the key element of a developer program; this is the core entry point for developers to sign up, access, and use your APIs. Getting access to your API should be simple and easy for developers. They should be able to get started quickly.

TTFHW is the best metric to measure this. You should also consider streamlining the sign-up process—the simpler and quicker, the better. A recommended best practice is that developers should be able to invoke your APIs to examine their behavior (request and response) without any sign-up at all. Also, supplementary content—such as getting started guides, API reference documentation, or source code—is great for lessening the learning curve.

API design questions for developer experiences

  1. How do we explain the value of the API in the first 5 minutes? Develop an “elevator pitch” about the value proposition of your API that best speaks to developers.
  2. What is our TTFHW and TTFPA and how do we reduce it? This is a powerful way to improve the developer friendliness of your API by thinking about the end-to-end TTFHW. We recommend keeping the TTFHW and TTFPA metrics in mind when considering any elements that are added to the developer experience (like portals), and every aspect of the API that changes.
  3. What is the onboarding process for developers, and is it as easy as possible? This needs to be in line with the use cases of your API. The level of security naturally needs to be higher for more sensitive APIs or data access, which probably needs more formal agreements. For everything else it should be very simple and straightforward to allow for early developer success (TTFHW).
  4. Are we allowing enough flexibility to make the API attractive for developers? It’s great if you’ve found the right value proposition, and developers sign up for your API. Keep in mind that helping them to be successful will retain and grow their numbers.
  5. How do we support developers if they face problems? We believe in the self-service approach, which will help you to scale. Many developer questions can be covered by good documentation, FAQs, or forums. But self-service has its limits, and for more in-depth questions or other complications, like invoice problems, there should be some type of support mechanism in place.
  6. Can our documentation support innovation? What support is there for developers who deviate from the normal use cases or wish to do something new? Great ideas can come from anywhere.

API design acceleration via ecosystem partners

As an API provider you are operating in an ecosystem of partners and vendors. These partners often have their own content distribution and communication networks and means. We recommend identifying alliances, which can be effective in helping to increase the adoption of your API. Often such alliances can be found when APIs are complementary and provide value to developers when combined.

API management refers to the processes for distributing, controlling, and analyzing the APIs that connect applications and data across the enterprise and across clouds.

Managing these APIs allows an organization to make sure the APIs are used in compliance with corporate policies and allows governance by appropriate levels of security, as some services may require different security policies than others. It allows organizations that create APIs—or use others’ APIs—to monitor activity and ensure the needs of the developers and applications using the API are being met. API management tools enable organizations to share API configurations, control access, collect and analyze usage statistics, and enforce security policies across their network and API consumers. In most cases, these organizations adopt a microservices architecture in order to meet demands by speeding up software development.

HTTP-based APIs have become the preferred method for synchronous interaction in microservices architectures, while REST APIs facilitate API keys, per-client throttling, request validation, and private API endpoints. These APIs are the glue that connects all of the microservices together.

API management benefits

API management is largely about centralizing control of your API program—including analytics, access control, monetization, and developer workflows. An API management solution, like Red Hat 3scale API Management, provides dependability, flexibility, quality, and speed. To achieve these goals, and ensure that both public and internal APIs are consumable and secure, an API management solution should provide access control, rate limits, and usage policies at the minimum. Most API management solutions generally also include the following capabilities:

  • A developer portal: A developer portal is a common best-practice for API management. Developer portals typically provide API documentation along with developer onboarding processes like signup and account administration.
  • An API gateway: The API gateway is the single point-of-entry for all clients. The gateway also determines how clients interact with APIs through the use of policies.
  • API lifecycle management: APIs should be manageable from design, through implementation, until retirement. Red Hat 3scale API Management is a leader in API lifecycle management.
  • Analytics: It’s important to know what’s going on with your APIs—which consumer or app is calling which API and how often. It’s also essential to know how many APIs have failed and why.
  • Support for API monetization: Monetize access to the microservices behind the APIs through usage contracts. API management allows you to define usage contracts based on metrics, like the number of API calls. Consumers can be segmented, and differentiated access tiers and service quality can be offered to different segments.
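
As an illustration of the rate-limit capability above, here is a minimal token-bucket limiter sketch (a simplified stand-in for what a gateway or management layer enforces, not any product’s implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter.

    `rate` tokens are replenished per second up to `capacity`; each request
    spends one token. A sketch, not a production implementation.
    """
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens, self.last = capacity, clock()

    def allow(self):
        now = self.clock()
        # Replenish tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

An API management layer typically keeps one such bucket per API key or access tier, which is how usage policies and tiered quotas are enforced at the gateway.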

These capabilities are considered during the API’s design so that the API can use self-managed or cloud components to provide traffic control, security, and access policy enforcement. Well-designed APIs can be shared, secured, distributed, controlled, and monetized on an infrastructure platform built for performance, customer control, and future growth.

API management and microservices

Microservices and APIs are the foundation for rapidly developing innovative application components to meet new business needs—an approach known as cloud-native application development. This approach is not without its challenges, though.

The key technical challenge to forming microservices is breaking up larger systems into their smaller components. APIs allow these smaller components to connect with data sources and each other.

Another challenge presented by a microservices architecture is how to coordinate the many frequently changing microservices. Service discovery makes this easier. API management services provide the necessary discovery mechanisms to ensure that available microservices can be found and documentation on how to use them is shared through the developer portal.

Microservices require an integrated approach to security. Security mechanisms differ depending on the type of API: external-facing services require different security mechanisms than internal ones. For less mission-critical APIs, simple protection with API keys is usually sufficient. For external or critical APIs, a more secure approach, like OAuth, will be required.
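
A minimal sketch of the API-key approach for less critical APIs (the key store and client names are hypothetical; in practice keys live in a vault or the API management platform, not in code):

```python
import hmac

# Hypothetical key store mapping API key -> client identity.
API_KEYS = {"k-abc123": "mobile-app"}

def authenticate(presented_key):
    """Check a presented key in constant time; return the client id or None."""
    for key, client in API_KEYS.items():
        if hmac.compare_digest(presented_key, key):
            return client
    return None
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking key contents through timing differences. For external or critical APIs, this simple scheme should give way to OAuth, as noted above.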

When designing APIs for microservices, consider these questions:

  1. How do we control access to our API?
  2. How do we capture metrics and handle alerts?
  3. How should spikes in usage be managed?
  4. Who is responsible for API uptime?
  5. How do we feel about undesired API usage?

API management platforms

API management platforms are made of several key components that provide end-to-end services to facilitate application integration, management, security, and monitoring of APIs.

An API gateway intercepts all incoming client requests and sends them through the API management system to connect to backend services.

A developer portal is a web-based dashboard that provides developers with self-service access to API documentation, code samples, software development kits (SDKs), and other resources for API developers.

Part of API management is providing an easy way to manage and monitor the entire API lifecycle. API lifecycle management tools include features for designing APIs using industry-standard specifications, versioning APIs, managing change control processes, and tracking API usage and performance over time.

API management platforms contain security and access control components to enforce authentication, authorization, and security policies to protect APIs and sensitive data from unauthorized access and abuse. They include features such as OAuth and OpenID for identity and access management.

A complete API management platform also includes analytics and monitoring tools to provide insights into API usage, performance, and behavior. These tools monitor API traffic volume, response times, error rates, and popular endpoints in real-time. They also provide historical data and trend analysis to identify performance bottlenecks, optimize API usage, and measure the impact of APIs on business objectives.

A successful API management solution should also allow for ecosystem expansion to integrate externally with other systems or partners or internally with systems in their organization’s network.

Measuring API management efforts

Without measuring the effects of our efforts we have no way of evaluating our success. Analytics provides data about API activities but we still must provide a definition of success. When defining success in your organization consider these 5 key performance objectives for APIs:

  1. Dependability. Dependability is the availability of the API to developers. A useful metric for measuring dependability is downtime. Is the API always available for use? Another metric is quota, which defines how many API calls can be made by a developer within a certain time frame. A quota protects an API from abuse and makes its management more predictable. Some API providers’ business models and price plans are based on quotas.
  2. Flexibility. Flexibility refers to the options developers have when adopting APIs. Greater flexibility in an API means greater effort (and cost) for the organization managing the API.
  3. Quality. Quality is the consistent conformance of the API’s behavior to developers’ expectations. It is a way of measuring developers’ satisfaction with the API.
  4. Speed. Speed can be measured by access latency and throughput. Speed can be influenced by techniques like throttling or caching.
  5. Cost. The goal of measuring cost is to provide developers with the best value for their money. All of the other 4 objectives contribute, in one way or another, to the cost objective.
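
The quota idea under "Dependability" can be sketched as a fixed-window counter (the window identifiers and limits here are illustrative):

```python
from collections import defaultdict

class Quota:
    """Fixed-window quota: at most `limit` calls per developer per window.

    Windows are identified by an integer (for example, the current hour).
    A sketch of the dependability mechanism described above.
    """
    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)

    def check(self, developer, window):
        key = (developer, window)
        if self.counts[key] >= self.limit:
            return False  # quota exhausted for this window
        self.counts[key] += 1
        return True
```

Per-developer counts like these are also the raw data behind analytics dashboards and quota-based price plans.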

Our modular, lightweight, and comprehensive API management solutions are open source, built on open standards, and available on-premise or in the cloud. This helps your team connect everything—apps to data, legacy to new—even as you grow. Our unique development model and commitment to open source technologies means our portfolio undergoes extensive testing by a diverse community—including Red Hat engineers, customers, independent software and hardware vendors, and partners.

As with all open source projects, we contribute code and improvements back to the upstream codebase, sharing advancements along the way. Of course, collaborating with a community is about more than developing code. Collaboration is about the freedom to ask questions and offer improvements—that’s the open source way and the power of the open organization. It’s why Red Hat has been a trusted provider of enterprise infrastructure for over 25 years.

Manage your APIs with 3scale

Red Hat® 3scale API Management is an API management platform that makes it easy to share, secure, distribute, control, and monetize your APIs. 3scale API Management is designed to scale and support hybrid architecture (on-premise, in the cloud, or any combination of the two). Want to expose and monetize the access to your APIs? Use the integrated developer portal and the platform integration with Stripe, Braintree, and Adyen to enable easy, end-to-end billing between API consumers and providers.

3scale API Management is divided between an API program management layer and an API traffic control layer. More traditional methods of traffic control took longer because an entire API call had to be authenticated. This new traffic control layer only needs to inspect the header of the incoming call, so that traffic is scanned and authenticated more quickly. Access, policy, and traffic controls make it simple to authenticate traffic, restrict by policy, protect backend services, impose rate limits, and create access tiers.

The two layers communicate with each other asynchronously, using configurable caching strategies on the API gateway—so that if, for example, the API management policy configuration is for some reason unavailable, the API program isn’t slowed down and is still functional. Because the API manager and API gateway are separated, you can scale independently and support more complex deployment options.

3scale API Management includes a fully customizable developer portal so that developers get everything they need (account and app management, analytics, API key management, etc.) in a single, easy-to-use location. An interactive API documentation tool allows developers to examine live APIs, and an analytics engine provides everything you need to know about your API performance and traffic patterns. Finally, you can “package APIs” differently into different products, defining and configuring different policies for different API consumers. This gives you the freedom to create unique business models on top of APIs to address different customer needs.

Managed service for API management with OpenShift API Management

Red Hat OpenShift API Management is a hosted and managed API management service delivered as an add-on product to Red Hat OpenShift Dedicated, a fully managed service of enterprise Kubernetes platform Red Hat OpenShift.

OpenShift API Management supports teams that want to take an API-first approach to building microservices-based applications so they can modernize existing systems, increase developer productivity, and deliver new applications faster.

Red Hat hosts, manages, and provides dedicated support for both OpenShift Dedicated and OpenShift API Management, including configuration, maintenance, and upgrades, so teams can focus on development rather than managing Kubernetes infrastructure.

With OpenShift API Management, you can:

  • Deploy, monitor, and control APIs throughout their entire life cycle
  • Create policies governing security and usage
  • Use existing identity management systems through a declarative policy without requiring custom code
  • Gain insight into health and use of APIs
  • Discover and share APIs by publishing to internal or external developer portals

OpenShift API Management, when added to OpenShift Dedicated, provides a streamlined developer experience for building, deploying, and scaling cloud-native applications. Monitor, configure, and publish all of your APIs from a unified, developer-friendly interface.

An API manager allows you to connect internal and external applications across multiple clouds, enforce company policies and governance, including rate limits and usage, and manage APIs through every stage of the development life cycle.

Manage your application connection with Connectivity Link

Red Hat Connectivity Link is a Kubernetes-native solution that helps you manage your applications to connect and communicate across different cloud environments. As a connectivity management tool, Connectivity Link is designed to simplify and enhance application connectivity, application management, and security across multicloud and multicluster environments.

With Red Hat Connectivity Link, you can simplify complex multicloud environments. As you and your team adopt multicloud strategies, the complexity of managing application connectivity across these environments is growing. Connectivity Link provides automated, consistent connectivity management, which is essential for maintaining agility and reducing operational overhead.

Built on the foundation of the Kuadrant open source project, Connectivity Link uses Gateway API and Envoy proxy to provide a unified, efficient approach to managing incoming and outgoing network traffic. Gateway API provides ingress traffic management across Kubernetes clusters, and Envoy is the default ingress gateway that simplifies deployment across clusters. Within the Envoy ingress gateway, a WebAssembly plugin (WASM) provides hardware-independent processing, allowing extensibility and compatibility across any environment where Envoy is deployed.

To enhance the Kubernetes ecosystem, Connectivity Link provides integration capabilities to other Red Hat products such as Red Hat OpenShift® and OpenShift Service Mesh. Red Hat Connectivity Link focuses on simplified multicluster application connectivity and integrated advanced traffic management and policy enforcement directly into Kubernetes and OpenShift environments. When integrated with Connectivity Link, OpenShift Service Mesh manages traffic routing and security using Envoy and Istio.
