Earlier this year, we announced a new Operator for Istio on Red Hat OpenShift available as Developer Preview for the next generation of OpenShift Service Mesh - version 3. That post provides important background on the changes coming to OpenShift Service Mesh in 2024. Since then, we have continued to develop the Sail Operator while supporting customers on OpenShift Service Mesh 2 and collecting feedback on our Service Mesh 3 plans. While the new Operator remains Developer Preview, this post will provide an update, discuss future plans, and offer initial guidance on how OpenShift Service Mesh users can prepare for OpenShift Service Mesh 3.

Service Mesh 3.0 updates

The Service Mesh 3 Kubernetes Operator is currently being developed as the “Sail Operator” and is available as a community Kubernetes Operator on OpenShift's Operator Hub. The Sail Kubernetes Operator is updated nightly, so it remains a work in progress and is subject to change. It may evolve to be different from what this blog post describes, so please only use it for experimentation at this time. See the included README for the latest information on the Kubernetes Operator.

We plan to move this community Kubernetes Operator to the upstream istio-ecosystem organization for greater community collaboration while contributing enhancements to the core Istio project to improve the compatibility of Istio on OpenShift. The downstream product artifacts of OpenShift Service Mesh 3 will reside in the newly created openshift-service-mesh organization, while the maistra organization will continue to be used for Service Mesh 2.

A Kubernetes Operator for Istio… and only Istio

As discussed in the previous blog post, OpenShift Service Mesh 3 will be based on a new Kubernetes Operator for Istio. Unlike the current OpenShift Service Mesh 2 Kubernetes Operator, the new Kubernetes Operator will only manage Istio resources and will not attempt to configure Istio integrations like Kiali. Complementary components, such as Kiali, metrics, tracing, and others, will be managed by independently supported product Kubernetes Operators.

When the Sail Operator was first released, the custom resource for installing the Istio control plane was called IstioHelmInstall. This resource has been renamed "Istio" to reflect that it is responsible for creating and managing a single instance of Istio (a control plane and a data plane).
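As a sketch of what this looks like today (the API remains experimental, so field names and the resource name itself may change), a minimal Istio resource might resemble:

```yaml
# Hypothetical minimal Istio resource for the Sail Operator.
# Field names reflect the current experimental API and may change;
# see the operator README for the authoritative schema.
apiVersion: operator.istio.io/v1alpha1
kind: Istio
metadata:
  name: istio-sample
spec:
  version: v1.20.0
  namespace: istio-system
```

Applying a resource like this asks the operator to create and manage one Istio control plane instance in the given namespace.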

Unlike the ServiceMeshControlPlane custom resource used in OpenShift Service Mesh 2, the Istio resource uses upstream Istio's Helm values to define Istio configuration. This makes it easier for users to translate community configuration examples to OpenShift Service Mesh 3. It also helps ensure that our future efforts to improve configuration will be done in collaboration with the Istio community. We have not ruled out a future convergence with the community's IstioOperator API, which is used by istioctl's installation process and by the (now discouraged) in-cluster Istio operator installation.

Our next efforts will include refining the organization of configuration and enhancements to better support features such as Istio's revisions, canary-style upgrades, and multi-tenancy. Please refer to the Kubernetes Operator's README file for the most up to date information.

Selecting releases

When we first released the Sail Operator, it deployed the latest version of Istio, effectively the master branch of Istio under development. While this was convenient for experimenting with the latest changes from the Istio community, in most cases users should run an official release of Istio to ensure stability and compatibility with istioctl and integrations like Kiali.

We now default to the latest official release of Istio, currently 1.20. This is configured with the version field in the Istio resource. When creating a new instance with the OpenShift console, there is now a drop-down menu for selecting from a list of available Istio releases. The available releases are defined in the versions.yaml file, which will be updated for each new Istio release.

Istio Version Selector Drop-Down Menu

The future OpenShift Service Mesh 3 product Kubernetes Operator, which will be based on the Sail Operator, will manage releases of OpenShift Service Mesh in a similar manner. While this version field is similar to the version field of the ServiceMeshControlPlane resource in Service Mesh 2.x, a notable difference is that users can specify a version down to the Z "patch" release level (e.g., 3.1.1). While we will only support the latest patch releases of OpenShift Service Mesh, this capability will allow users to pin or roll back to a particular "z" patch release, providing greater control and flexibility for managing patch updates.

Configuration validation

The primary field for configuring Istio with the new CRD is the values field. This powerful field enables users to reference Istio Helm configuration values directly. We have added validation to this field to catch non-existent configuration values and other configuration errors based on upstream Istio's protobuf validations.
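To illustrate, here is a sketch of an Istio resource passing upstream Helm values through spec.values. The specific values shown (disabling pilot autoscaling, enabling access logging) are illustrative choices, not recommendations; any valid Istio Helm value can be set here:

```yaml
# Sketch: configuring Istio via upstream Helm values in spec.values.
# The values below are illustrative examples only.
apiVersion: operator.istio.io/v1alpha1
kind: Istio
metadata:
  name: istio-sample
spec:
  version: v1.20.0
  values:
    pilot:
      autoscaleEnabled: false   # Helm value from the istiod chart
    meshConfig:
      accessLogFile: /dev/stdout  # enable Envoy access logging
```

Because these are the same Helm values used by the community charts, community examples translate directly.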

These validations also enable managing the values field as follows:

$ oc explain istio.spec.values
KIND:     Istio
VERSION:  operator.istio.io/v1alpha1
RESOURCE: values <Object>
    Values defines the values to be passed to the Helm chart when installing
  base <Object>
  cni <Object>
  defaultRevision <string>
  global <Object>
  istio_cni <Object>
  istiodRemote <Object>
  meshConfig <>
  ownerName <string>
  pilot <Object>
  revision <string>
  revisionTags <[]string>
  sidecarInjectorWebhook <Object>
  telemetry <Object>
  ztunnel <>

As there may be times when it is desirable to override these validations—for example, to access experimental configuration that is not yet part of Istio's protobuf schema—we have also included a rawValues field, which is identical to values, except that it is not validated.
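As a sketch, a rawValues override might look like the following. The setting shown is purely hypothetical, included only to illustrate the mechanism:

```yaml
# Sketch only: rawValues bypasses schema validation, so typos and
# non-existent fields (like this hypothetical setting) are NOT caught.
spec:
  rawValues:
    pilot:
      someExperimentalSetting: "true"
```

Because nothing is validated here, misspelled keys fail silently, so values is preferable whenever the configuration exists in the schema.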

Note that the Istio resource, values, and rawValues fields remain experimental and are subject to change. Refer to the project README for the latest information.

Istio status and configuration

You should validate the status of your control plane once you've applied the Istio configuration. Do this using the following command:

$ kubectl get istio
NAME           READY   STATUS    VERSION   AGE
istio-sample   True    Healthy   v1.20.0   62s

Or, use the status field:

Istio Custom Resource Definition

When expanded, you can use the status.appliedValues field to validate that the configuration was applied as expected using the spec.values field.
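For example, assuming an Istio resource named istio-sample, the applied values can be inspected from the command line with something like:

```
$ kubectl get istio istio-sample -o jsonpath='{.status.appliedValues}'
```

Comparing this output against spec.values confirms whether the operator accepted and applied your configuration.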

Istio on OpenShift

As part of our initiative to converge with the community Istio, we continue to contribute to upstream Istio to improve the compatibility of Istio on OpenShift. This is for our own benefit (to simplify our work to productize Istio) and for the community, customers, and partners. Our contributions make running community Istio on OpenShift easier while providing a seamless onboarding path to our supported OpenShift Service Mesh.

An example of this effort was removing the need to grant the anyuid Security Context Constraint (SCC) privilege to the Istio control plane and data plane components, as recently highlighted in Istio 1.20. We will make similar contributions on an ongoing basis, the most significant of which will be an effort to make Istio's Ambient mesh work on OpenShift.

Gateway best practices

When this Kubernetes Operator was announced, it automatically installed gateways, similar to the default Istio installation configuration profile. This is consistent with OpenShift Service Mesh 2.x, which creates a default ingress and egress gateway called istio-ingressgateway and istio-egressgateway, respectively.

While these auto-created gateways are convenient for getting started, they do not provide the configurability necessary for production environments. We also feel strongly that gateways are better created and managed with their applications in the data plane rather than in the control plane. This is a better security practice, as it limits the scope of each gateway to a smaller set of applications and allows a gateway to follow the lifecycle of its applications rather than that of the control plane.

Thus, we have opted to remove these control plane gateways in favor of guiding users to create gateways with their applications using either gateway injection or the Kubernetes Gateway API. istio-ingressgateway and istio-egressgateway, as specified in OpenShift Service Mesh 2.x's ServiceMeshControlPlane, will not be included in Service Mesh 3.0. Instead, we will provide example configurations of gateways for the Bookinfo application using gateway injection and the Kubernetes Gateway API.

With gateway injection, gateways are created and managed like any other workload on Kubernetes or OpenShift using a Deployment resource that is injected with an Envoy proxy. This approach gives complete control of the gateway to the application owner. It is the recommended way to create and manage gateways in OpenShift Service Mesh 2.3 and beyond.
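A sketch of a gateway created via gateway injection is shown below, following the pattern in the upstream Istio gateway injection documentation. The names and namespace are illustrative:

```yaml
# Sketch: an ingress gateway deployed via gateway injection.
# The "auto" image and gateway template annotation tell the injector
# to replace this pod with a configured Envoy proxy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookinfo-gateway
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      istio: bookinfo-gateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
      labels:
        istio: bookinfo-gateway
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: istio-proxy
        image: auto
```

Because this is an ordinary Deployment, the application team controls its replicas, resources, and upgrades like any other workload.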

With the Gateway API, in Technology Preview as of OpenShift Service Mesh 2.4, Istio automatically creates and configures a gateway Deployment for each Gateway resource instance.
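A minimal sketch of a Gateway resource using Istio's Gateway API implementation, loosely based on the upstream Bookinfo examples (names are illustrative):

```yaml
# Sketch: a Gateway resource; Istio provisions the underlying
# gateway Deployment and Service automatically.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: bookinfo
spec:
  gatewayClassName: istio   # selects Istio's Gateway API implementation
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same   # only routes from this namespace may attach
```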

These options allow gateways to be created and managed with applications, ideally as part of a GitOps workflow.

Kubernetes Gateway API

The Kubernetes Gateway API represents the next generation of APIs for modeling networking in Kubernetes. Compared to the current Kubernetes Ingress API, it provides substantially more flexibility and extensibility for managing networking across a large organization. While initially intended to manage north/south traffic from clients outside the cluster, it has grown to include east/west traffic, including service mesh. The GAMMA initiative was created to define how the Gateway API can cover service mesh use cases. Istio now includes Gateway API configuration examples for most documented tasks, such as Traffic Management.

While Gateway API remains a Technology Preview feature in OpenShift Service Mesh 2.4 and must be enabled with a feature flag, it is now generally available in the community. Version 1.0 of the API is available in Istio 1.20 (which will be included with OpenShift Service Mesh 2.6 and beyond). Istio plans to make the Gateway API the default API for all traffic management in the future while continuing to support Istio APIs (Gateway, VirtualService, DestinationRule, etc.) for the foreseeable future.
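With version 1.0 of the API, routing rules are expressed through resources such as HTTPRoute. A sketch routing Bookinfo traffic, with names and the port following the upstream sample application:

```yaml
# Sketch: an HTTPRoute attaching to a Gateway named bookinfo-gateway
# and routing /productpage traffic to the Bookinfo productpage service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  parentRefs:
  - name: bookinfo-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /productpage
    backendRefs:
    - name: productpage
      port: 9080
```

The same HTTPRoute resource also underpins GAMMA's east/west (mesh) use cases, where it attaches to a Service instead of a Gateway.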

Our excitement around the Gateway API project's potential to provide a common standard for Kubernetes networking goes well beyond service mesh.

We are developing a Gateway API-based implementation of OpenShift Ingress that users can deploy independently of a service mesh via the Gateway API Ingress operator. For more information on this work and Gateway API, see this blog post and the more recent update.

Meanwhile, the team that brought you 3scale API Management is working on the Kuadrant.io project, which will leverage the Gateway API to address use cases around how external traffic reaches ingress gateways, such as multi-cluster connectivity, global load balancing, rate limiting, authorization, and more. Look for more information on this project in an upcoming blog post.

Get started with Istio add-ons like Kiali

A notable change in OpenShift Service Mesh 3.0 is that the Kubernetes Operator will only manage Istio. Integrations such as Kiali, distributed tracing, and metrics will be installed and managed independently. While this will add steps to the "getting started" experience, we feel the trade-off of having more modularity and flexibility in configuring these components will be worth it.

To help users get up and running quickly, we have added instructions to the Kubernetes Operator README for setting up Istio with istioctl, sample gateways, Prometheus, Jaeger, and Kiali. This provides a demo/development environment roughly equivalent to what OpenShift Service Mesh 2.x offers out of the box today. It also provides a preview of the installation workflow we plan to deliver in OpenShift Service Mesh 3, where Istio is installed on its own and the add-ons are installed independently. Of course, the supported Service Mesh 3.0 will use supported product Kubernetes Operators for each of the Istio add-ons, along with a supported distribution of istioctl. These community Istio add-on configurations are for demonstration/development purposes only and should not be used in production environments.

Preparing for Service Mesh 3.0

There are several things that OpenShift Service Mesh 2 users can do today to prepare to adopt Service Mesh 3.0.

It's important to remember that OpenShift Service Mesh 3 will continue to be based on Istio, and Istio's stable APIs likely to be used by end users will not change. What is changing in OpenShift Service Mesh 3, and will require migration, are control plane configuration resources such as ServiceMeshControlPlane, ServiceMeshMember, and ServiceMeshMemberRoll. These resources are usually managed by administrators or platform teams rather than application owners. We will continue exploring ways administrators can migrate their existing control plane configurations to Service Mesh 3 configurations.

Application-specific configuration, meaning Istio resources such as VirtualService, DestinationRule, and even PeerAuthentication, will not change. Thus, users should feel confident that they can begin or expand their OpenShift Service Mesh 2 usage without having to migrate application- or service-specific configurations when they move to OpenShift Service Mesh 3.

There are some things users can do today to make the move to OpenShift Service Mesh 3.0 easier. In addition to using the latest OpenShift Service Mesh Release (2.4+), users can:

  • Adopt or migrate to gateway injection for creating and managing Istio gateways with their applications rather than with the Istio control plane (which is the default in Service Mesh 2.0). As described above, the control plane in 3.0 will not create gateways.
  • If multiple control planes are not required, use cluster-wide mode. With this mode, Istiod manages the entire cluster rather than a set of member namespaces. This will be the default in Service Mesh 3.0, with the possibility of creating multiple control planes using the upcoming multiple control planes feature.
  • Configure Service Mesh to use OpenShift's user workload monitoring or Red Hat Advanced Cluster Management's Observability for capturing metrics. These will provide a production grade monitoring stack with alerting that will be much more configurable and extensible than the metrics stack installed with OpenShift Service Mesh 2.x (and which will be removed in Service Mesh 3).
  • Use externally configured Kiali and Jaeger resources rather than configuring these directly within the ServiceMeshControlPlane resource. Besides providing more flexibility for managing Kiali and Jaeger, these will be configured independently in Service Mesh 3.
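As one example of preparing today, cluster-wide mode in Service Mesh 2.4+ is enabled through the mode field of the ServiceMeshControlPlane. A minimal sketch (resource name and namespace are illustrative):

```yaml
# Sketch: enabling cluster-wide mode in OpenShift Service Mesh 2.4+.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.4
  mode: ClusterWide
```

Running cluster-wide today means the control plane topology will already match the Service Mesh 3 default when it is time to migrate.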

We will publish a blog post that goes into more depth on each of these topics at a later date.

What’s next for OpenShift Service Mesh?

Our next release will be OpenShift Service Mesh 2.5 (based on Istio 1.18) in early 2024. We have also decided to do a 2.6 release based on Istio 1.20 or later in 2024 to be sure customers have at least one year of overlapping support to upgrade from OpenShift Service Mesh 2 to 3. The 2.6 release will also give us additional time to collect feedback on OpenShift Service Mesh 3 while it is in a Technology Preview state.

For OpenShift Service Mesh 3, we continue to evolve the new Kubernetes Operator, including refining the custom resource definition(s) to better manage Istio configuration and adding features to better support canary upgrades of Istio control planes. We are targeting late Q1 of 2024 for Technology Preview, with general availability in the second half of 2024. Of course, these targets are subject to change. We will continue to support customers on OpenShift Service Mesh 2.x until we have an OpenShift Service Mesh 3 that we are proud to make generally available.

Visit this page to learn more about Red Hat OpenShift Service Mesh.

About the author

Jamie Longmuir is the product manager leading Red Hat OpenShift Service Mesh. Prior to his journey as a product manager, Jamie spent much of his career as a software developer with a focus on distributed systems and cloud infrastructure automation. Along the way, he has had stints as a field engineer and training developer working for both small startups and large enterprises.

Read full bio