Red Hat Blog
We’re excited to announce our participation in the Service Mesh Interface, a collaboration with Microsoft and others on a specification to make it easier for service meshes to run on Kubernetes.
Kubernetes has established itself as the cloud native equivalent of an operating system. Thus, it's important to allow composability and flexibility in the implementation details. While Fedora and Red Hat Enterprise Linux both have their origins in source control, the flexibility of the Linux kernel and its interfaces allows us to use those same sources to compose different operating systems in a manner that meets the needs of different users.
Customers and community members alike have been seeking a way to better standardize the configuration and operation of service meshes. With the beginning of the Service Mesh Interface (SMI), we see this as a way to help maximize choice and flexibility for our Red Hat OpenShift customers.
“Service Mesh Interface defines a set of common, portable APIs for developers to use in a provider-agnostic manner. As service mesh technology continues to evolve, the interoperability provided by SMI can help the emerging ecosystem of tools and utilities that integrate with existing mesh providers. Working alongside Kubernetes leaders like Red Hat on SMI helps customers and the community get the flexibility they need thanks to a standard interface for service meshes on Kubernetes,” said Gabe Monroy, Lead Program Manager, Containers, Microsoft Azure.
Flexibility and interoperability with the Service Mesh Interface
The Service Mesh Interface (SMI) is a community specification for service meshes that run on Kubernetes, designed to enable flexibility and interoperability. It defines a common standard that can be implemented by a variety of providers, helping to bring both standardization for end-users and innovation by service mesh providers. SMI is designed to be an ecosystem-friendly solution that provides consistent APIs for users to use and build on service mesh technologies.
SMI builds on the industry concepts of helping to maintain a baseline of common APIs to enable composability and innovation in the container ecosystem. It follows in the footsteps of existing Kubernetes resources like the Container Networking Interface (CNI), which is a specification started in 2014 that enables a common interface for the network connectivity of containers and is used by many today. Those that provide tools and service meshes can use SMI APIs directly or build on these to translate SMI to native APIs.
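To make the idea of a common, provider-agnostic API concrete, here is a minimal sketch of an SMI-style TrafficSplit resource, which shifts a percentage of traffic between two versions of a service. The `apiVersion`, service names, and weights shown are illustrative assumptions based on the SMI specification and may differ between SMI versions and mesh implementations.

```yaml
# Illustrative SMI TrafficSplit: routes traffic for "reviews" across
# two backend versions. A mesh provider implementing SMI (or an
# adapter translating SMI to native APIs) acts on this resource.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: reviews-rollout        # hypothetical resource name
spec:
  service: reviews             # root service that clients address
  backends:
  - service: reviews-v1        # stable version keeps most traffic
    weight: 90
  - service: reviews-v2        # canary version receives a small share
    weight: 10
```

Because the resource describes intent (split traffic 90/10) rather than any one mesh's routing mechanics, the same manifest can be applied whether the underlying mesh is Istio, Linkerd, or another SMI-compatible provider.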
At Red Hat, we want to make it easy for customers of Red Hat OpenShift, the industry's most comprehensive enterprise Kubernetes platform, to focus on their higher-level needs while OpenShift provides the flexibility to prioritize functionality over implementation details. As we recently announced, OpenShift 4 Service Mesh takes Istio and combines it with other key projects, like Jaeger for tracing and Kiali for visualization, to bring better manageability and traceability to microservices deployments. Developers can focus on building the business logic, letting the service mesh manage how each microservice communicates based on policies they define. They can also leverage the tracing and visualization capabilities to debug issues when they occur.
As SMI matures as a standard interface for meshes on Kubernetes, we plan to utilize its capabilities across OpenShift Service Mesh so our developers have the flexibility of a common API to build on and our customers have a consistent experience as we evolve and improve the product.
Check out SMI
Overall, SMI benefits both vendors and the community ecosystem: by working together on a common set of capabilities, we can help customers get started with service mesh technology quickly while leaving room for flexibility and innovation by service mesh providers.
To check out the specification and get involved in SMI, visit smi-spec.io.