Coming into this year, CoreOS’s Alex Polvi predicted that Istio, an open source tool to connect and manage microservices, would soon become a category-leading service mesh (essentially a configurable infrastructure layer for microservices) for Kubernetes. Today we reach a milestone that brings us closer to that prediction: the general availability of Istio 1.0.

Istio provides a way to add capabilities like load balancing, mutual service-to-service authentication, transport layer encryption, and application telemetry while requiring minimal (and in many cases no) changes to the code of individual services. This contrasts with solutions like the various Java libraries from Netflix OSS, which require both developing in Java and modifying source code in order to integrate these capabilities into each application component separately. I like to think of Istio as another component in your application stack, one that provides this functionality without extensive code changes.
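
To make the “no code changes” point concrete, here is a minimal sketch of how mutual TLS might be enabled for all services in a namespace using Istio’s authentication Policy and a matching DestinationRule. The `myproject` namespace and host pattern are illustrative assumptions, not part of any particular deployment; the application containers themselves are untouched.

```yaml
# Require mTLS for workloads in the namespace (server side).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: myproject   # hypothetical namespace for this example
spec:
  peers:
  - mtls: {}
---
# Tell client-side sidecars to originate mTLS to those services.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: myproject
spec:
  host: "*.myproject.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

The certificates involved are issued and rotated by Istio’s Citadel component, so application code never handles them directly.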

We thank fellow community members for the hard work in getting Istio to this point, including the teams at Google, IBM, Lyft, Cisco, Covalent, Stripe, and everyone else who has contributed.

"Istio allows large enterprises to reliably deploy, secure and manage services across Kubernetes and VM environments, and we have received lots of great feedback from organizations who are using Istio in production," said Dan Ciruli, Istio steering committee member and senior product manager, Google Cloud. “The Istio 1.0 launch is testament to innovation through collaboration across industry leaders, including Google Cloud and Red Hat."

Getting to know the service mesh layer of the Kubernetes stack

A network of microservices can be complex. Istio helps with intelligent routing and load balancing, and with enforcing organizational policy between your services and applications. The goal of the service mesh layer is to simplify cloud-native application development and management. Developers want applications that are better connected, more secure, and easier to manage. A service mesh can aid in testing how applications perform and how they behave when components within the environment fail. It can also ease migration between two versions of an application, mirror specific segments of traffic for fault testing, and provide more robust traffic handling for your applications. One of the most important uses is allowing you to test your application so you can safely perform rolling upgrades with limited downtime, as sketched below.
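
As a rough sketch of what that looks like in practice, the resources below split traffic between two versions of a hypothetical `reviews` service during a rolling upgrade. The service name, subsets, and weights are assumptions chosen for illustration.

```yaml
# Define the two versions (subsets) of the service by pod label.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
# Send 90% of requests to v1 and 10% to the new v2 during the upgrade.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Shifting the weights toward v2 completes the upgrade without touching the application, and a `mirror:` stanza on the same route can copy live traffic to v2 for fault testing before it ever serves real users.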

You may have heard a number of different options all referred to as a “service mesh.” Ultimately, it is easier to define a service mesh by the features it provides and then identify how it achieves those goals.

Looking at the current CNCF projects, there are two incubating projects in the “service mesh” space: Envoy and Linkerd. If you look at the Linkerd examples Git repo, you will see references to Istio. Finally, if you read the README in the Istio repo, it calls out using Envoy. This seems like a circular dependency, right?

In the end, it’s much simpler than that. A “service mesh” is an additional layer added to an application stack to handle traffic in a more elastic way. Istio deploys a number of proxy servers which are colocated with your application components, and then uses these proxy servers to perform its various functions.
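
As a sketch of what “colocated” means here, the Deployment below opts a hypothetical `reviews` workload into sidecar injection with a pod annotation; the image and port are placeholders. With the automatic injection webhook enabled for a namespace (for example, `kubectl label namespace myproject istio-injection=enabled`), even the annotation becomes unnecessary.

```yaml
# The Envoy sidecar is added to this pod at admission time;
# the application container itself is unchanged.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"   # opt this pod into injection
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: example/reviews:v1   # placeholder application image
        ports:
        - containerPort: 9080
```

The injected proxy container intercepts the pod’s inbound and outbound traffic, which is how Istio applies routing, security, and telemetry policy transparently.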

The proxy system is pluggable, allowing for the use of a number of conformant software packages, including Envoy, Linkerd, and others (in the same way that one can use either CRI-O or Docker as the container engine within OpenShift). All told, Istio provides an API for managing the entire mesh, including the individual proxy containers.
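
For example, proxy-level behavior such as load balancing strategy and connection limits is expressed through the mesh-wide API rather than configured on each proxy individually. The sketch below applies such a policy to a hypothetical `ratings` service; the field values are illustrative, not recommendations.

```yaml
# Istio pushes this configuration to every sidecar that talks to "ratings".
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN         # prefer the backend with the fewest active requests
    connectionPool:
      tcp:
        maxConnections: 100      # cap concurrent TCP connections per proxy
```

Applying or changing this resource reconfigures the relevant proxies across the mesh, without restarting the application pods.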

What’s new in Istio 1.0

Building on the v0.8 release, Istio 1.0 brings a number of improvements, including better handling of role-based access control (RBAC), improved transport layer security (TLS) handling, component stabilization, increased (and refactored) test suites, and a comprehensive testing effort by the community. For more detail on what the 1.0 release entails, check out the related post on the CoreOS blog.
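
To give a flavor of the RBAC side, the sketch below grants read-only access to a hypothetical `reviews` service using the ServiceRole and ServiceRoleBinding resources from Istio’s v1alpha1 RBAC API. The service, namespace, and service account names are assumptions for the example.

```yaml
# Allow GET requests to the reviews service...
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: reviews-viewer
  namespace: default
spec:
  rules:
  - services: ["reviews.default.svc.cluster.local"]
    methods: ["GET"]
---
# ...but only for callers running as the productpage service account.
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-reviews-viewer
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/productpage"
  roleRef:
    kind: ServiceRole
    name: reviews-viewer
```

Enforcement also requires turning RBAC on for the relevant namespaces (via the RbacConfig resource in the 1.0 release).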

In the future

We understand the rising interest in Istio, and in service mesh generally, within the Kubernetes community. Our goal is to make Istio a first-class citizen of Kubernetes and OpenShift, so you won’t need to be an expert and can focus on your application, freeing your team from minutiae better solved and executed with software. In OpenShift, we plan to address this need in a future release.

Istio will be available in Tech Preview shortly after the release of OpenShift 3.10, with more functionality planned for future releases.

Join us for an OpenShift Commons briefing on August 7 at 12 noon ET (and available for replay) for a deeper dive on the project.

Brian "Redbeard" Harrington is product manager, Istio, at Red Hat