What is Istio?

Istio is an open source service mesh that controls how microservices share data with one another. It complements and extends Kubernetes to control the flow of traffic, enforce policies, and monitor communications in a microservices environment. It includes APIs that let Istio integrate into any logging platform, telemetry system, or policy system. Istio can run in a variety of on-premises, cloud, containerized, and virtualized environments.

Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes more difficult to observe and manage. A service mesh addresses these problems by intercepting traffic between services, and it can modify, redirect, or create new requests to other services.

Istio’s architecture is divided into the data plane and the control plane. Istio uses Envoy proxies: high-performance proxies deployed as sidecars that mediate traffic for all services within the service mesh. In the data plane, developers add Istio support to a service by deploying a sidecar proxy within the environment. These sidecar proxies sit alongside microservices and route requests to and from other proxies. Together, the proxies form a mesh network that intercepts network communication between microservices. The control plane manages and configures the proxies to route traffic, and it also configures components to enforce policies and collect telemetry.

Traffic management

Istio provides fine-grained control of traffic flow between services. Its advanced traffic routing capabilities support several testing and deployment methods:

  • A/B testing, which compares two releases against each other.
  • Canary deployment, which releases a new version to a small subset of users first.
  • Blue-green deployment, which runs two separate, identical environments to reduce downtime and mitigate risk.

Istio also handles load balancing across service instances. Outbound traffic from a service is intercepted by its sidecar proxy, which forwards the request to the appropriate destination based on routing rules defined in the control plane.
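
As an illustration, a weighted routing rule of this kind can be expressed with Istio's VirtualService resource. The sketch below uses a hypothetical reviews service and assumes subsets v1 and v2 are defined in a companion DestinationRule; it sends 90% of traffic to v1 and 10% to a canary v2:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews          # the in-mesh service this rule applies to
  http:
  - route:
    - destination:
        host: reviews
        subset: v1   # stable version receives most traffic
      weight: 90
    - destination:
        host: reviews
        subset: v2   # canary version receives a small share
      weight: 10
```

Shifting the weights over time (for example 90/10, then 50/50, then 0/100) is how a canary rollout is typically completed.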

Service discovery and resiliency 

Istio includes capabilities for automatically discovering services within the mesh. The control plane keeps track of all service instances and their locations; when a new service instance starts, it registers itself with the control plane. Istio provides resiliency mechanisms such as retries, timeouts, and circuit breaking, and it can perform fault injection testing to simulate failure scenarios and assess how a system behaves in unusual conditions.
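
Retries, timeouts, and fault injection are all configured on a VirtualService. This sketch, using a hypothetical ratings service, injects a 5-second delay into 10% of requests while also retrying failures and capping the overall request time:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 10.0     # inject a delay into 10% of requests
        fixedDelay: 5s    # simulate a slow upstream dependency
    route:
    - destination:
        host: ratings
    retries:
      attempts: 3         # retry failed requests up to 3 times
      perTryTimeout: 2s   # each attempt gets at most 2 seconds
    timeout: 10s          # overall deadline for the request
```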

Observability and extensibility

Istio provides observability and extensibility. It offers distributed tracing through integrations with tools like Jaeger or Zipkin and metrics and telemetry using Prometheus. It includes detailed service-level dashboards for visualizing communication between services. Sidecar proxies collect metrics such as request counts, latency, and error rates, and send them to the control plane or monitoring tools. Istio can be integrated with external systems like monitoring tools, logging systems, and custom policy engines, which allows new capabilities and functionality to be added to the service mesh.

Security and policy enforcement

Mutual transport layer security (mTLS) provides privacy and security between two applications by authenticating both parties. In a standard TLS handshake, the authentication goes one way: the client verifies the server's identity. With mTLS, the client and the server (or the website and the web browser) authenticate each other. Istio uses mTLS for secure service-to-service communication, and also uses role-based access control (RBAC) and policies for securing APIs, as well as certificate management and automatic key rotation.
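
In Istio, mTLS behavior is controlled with a PeerAuthentication resource. As a sketch, applying the following in the istio-system root namespace enforces strict mTLS mesh-wide, so plaintext traffic between workloads is rejected:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # root namespace makes this policy mesh-wide
spec:
  mtls:
    mode: STRICT           # only mutual-TLS traffic is accepted
```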

Istio centralizes configuration for service policies like quotas, rate-limiting, and authentication and authorization. It gives you fine-grained control over service interactions through access policies. Policies for authentication, rate-limiting, or access control are enforced at the proxy level, ensuring consistency across services.
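
Access control of this kind is expressed with an AuthorizationPolicy. The following sketch uses hypothetical namespace, workload, and service account names; it allows only the frontend service account to call a payments workload, and only with GET and POST:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend
  namespace: payments
spec:
  selector:
    matchLabels:
      app: payments        # the workload being protected
  action: ALLOW
  rules:
  - from:
    - source:
        # workload identity of the permitted caller (SPIFFE-style)
        principals: ["cluster.local/ns/frontend/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]
```

Because the policy is enforced in the sidecar proxy, the payments application itself needs no code changes.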

Istio also includes ambient mode, a data plane mode sometimes referred to as “sidecar-less” because in ambient mode, workload pods no longer require sidecar proxies to participate in the mesh. Instead, the sidecar proxies are replaced with a data plane integrated into the infrastructure, which still maintains Istio’s zero-trust security, telemetry, and traffic management. By eliminating sidecars, ambient mode also reduces CPU and memory consumption. In Istio’s sidecar mode, an Envoy proxy is injected into every application pod, but in ambient mode, application pods remain untouched and contain only their own application containers.

Istio service mesh can be used to achieve several specific goals and tasks. Below are several use cases for Istio and examples of how service mesh can help an organization achieve its goals. 

Microservices traffic control

A large e-commerce platform deploys frequent updates to its services such as its cart features, payment options, and inventory. With a service mesh, the organization can use canary deployments to roll new features out gradually to a subset of users. It can use blue-green deployments to move traffic from the old version to the new version without downtime or disruption to the user experience. A/B testing helps the organization route specific percentages of traffic to different service versions.
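
A/B routing of this kind is often expressed with match rules rather than raw percentages. The following sketch, with a made-up cart service and header name, sends users whose requests carry an x-experiment-group: b header to version v2 and everyone else to v1:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: cart
spec:
  hosts:
  - cart
  http:
  - match:
    - headers:
        x-experiment-group:
          exact: "b"       # test cohort identified by a request header
    route:
    - destination:
        host: cart
        subset: v2         # experimental version
  - route:                 # default route for all other traffic
    - destination:
        host: cart
        subset: v1
```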

Secure service-to-service communication

A financial services company processes sensitive user data across multiple services to manage accounts and detect fraudulent activity. Using service mesh, it is able to enforce mTLS for enhanced security to encrypt communication between services. The service mesh also provides granular RBAC for service interaction.

Resiliency and fault tolerance

A video streaming platform wants to ensure uninterrupted playback even if a specific service fails or becomes slow. A service mesh provides circuit breaking capabilities to automatically stop sending requests to failing services. With retries and exponential backoff, failed requests are retried intelligently. Service mesh load balancing helps distribute traffic across healthy service instances.
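
In Istio, circuit breaking is configured through a DestinationRule. This sketch, with a hypothetical playback service, caps pending requests and ejects instances that return consecutive server errors from the load-balancing pool:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: playback
spec:
  host: playback
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100  # queue limit before requests fail fast
    outlierDetection:
      consecutive5xxErrors: 5   # eject an instance after 5 straight 5xx errors
      interval: 30s             # how often instances are evaluated
      baseEjectionTime: 60s     # how long an ejected instance stays out
```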

Observability and monitoring

A software-as-a-service (SaaS) platform running on Kubernetes needs to diagnose latency issues across dozens of microservices. A service mesh provides distributed tracing that allows developers to track requests across services. It also offers real-time telemetry, including error rates and traffic patterns.

API gateway integration

An API gateway is used to expose services to external clients while internal services communicate within the mesh. Using a service mesh helps secure internal service communication while allowing external traffic to flow through the API gateway. Service mesh also applies policies uniformly so organizations can be sure that rules such as rate-limiting are enforced consistently across internal services.
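
Istio's own ingress gateway can play this role. As a sketch with a hypothetical hostname and TLS secret, a Gateway resource exposes external HTTPS traffic, which a VirtualService can then route to in-mesh services:

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway    # bind to Istio's ingress gateway deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE           # terminate TLS at the gateway
      credentialName: public-cert   # Kubernetes secret holding the certificate
    hosts:
    - "api.example.com"
```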

Regulatory compliance

A healthcare provider must comply with Health Insurance Portability and Accountability Act (HIPAA) requirements for secure data transmission. With a service mesh, the provider can enforce encryption standards such as TLS. Service mesh also provides detailed audit logs of service communication for compliance and documentation.

Dynamic environments

A gaming company frequently scales services up and down during peak gaming hours or promotional events. A service mesh can automatically discover and route traffic to newly created service instances. Service mesh also ensures consistent performance during scaling operations.

Red Hat® OpenShift® Service Mesh, which is based on the Istio project, addresses a variety of problems and use cases in a microservice architecture by creating a centralized point of control in an application. OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the application code. The mesh introduces an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring.

Red Hat OpenShift Service Mesh is tested and optimized for Red Hat OpenShift. It provides compatibility with OpenShift-specific features like operators and continuous integration and continuous delivery (CI/CD) pipelines. It comes with Red Hat’s enterprise support and is regularly updated and patched for security and stability. OpenShift Service Mesh works across multiple Red Hat OpenShift clusters, creating consistency across hybrid cloud or multicloud environments. It facilitates multi-tenancy, allowing organizations to manage separate service meshes for different teams or workloads. Its built-in security features enable mTLS for all services by default and integrate with Red Hat OpenShift’s OAuth for trusted authentication and authorization capabilities.
