gRPC vs REST
REST (Representational State Transfer) is an architectural style that provides guidelines for designing web APIs. It uses the standard HTTP/1.1 methods (GET, POST, PUT, DELETE, etc.) to provide communication between systems, typically exchanging text-based payloads such as JSON or XML. When multiple requests reach the server over an HTTP/1.1 connection, they are handled one at a time.
gRPC (gRPC Remote Procedure Calls) is an open source data exchange technology developed by Google that runs over the HTTP/2 protocol. It defines the rules a developer must follow to develop or consume web APIs, with a .proto file serving as the API contract. gRPC also supports streaming communication and serves multiple requests simultaneously without establishing new TCP connections. Because data is exchanged in the Protocol Buffers (Protobuf) binary format, messages are automatically converted to and from the desired programming language.
The key difference highlighted here is gRPC's ability to send and receive multiple requests over the same TCP connection.
Why people may need to access services via both styles
The smaller on-the-wire payload and the ability to multiplex calls over a single connection help with the latency of an individual request. The highly parallel and reactive nature of many gRPC implementations often results in better throughput performance relative to similar implementations in REST, so overall, in the event of high volumes of transactions, gRPC outperforms REST.
When integrating services, developers may discover a need to transcode their services from gRPC to REST. This blog post aims to outline a solution for that use case.
Tools needed for implementation
For local testing and development:
- Envoy
- gRPC service and Protobuf file(s)
- API client (e.g., Postman)

For deploying on a Kubernetes cluster:
- Kubernetes cluster (e.g., Red Hat OpenShift)
- Istio (e.g., Red Hat OpenShift Service Mesh)
- gRPC service and Protobuf file(s)
- API client (e.g., Postman)
Tutorial
This solution utilizes Envoy, an L7 proxy and communication bus, an extended version of which is implemented within Istio. Using an Envoy filter, we can build a transcoder that allows a RESTful JSON API client to send requests over HTTP and get proxied to a gRPC service. The transcoder does this by encoding the message output of a gRPC service method into JSON and setting the HTTP response Content-Type header to application/json.
Example Content-Type header from REST and gRPC API responses respectively:
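These are the typical values you will see (the REST response is produced by the transcoder; a gRPC response may also carry a subtype such as application/grpc+proto):

```
REST:  Content-Type: application/json
gRPC:  Content-Type: application/grpc
```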
Testing locally with Envoy
The following steps outline what you need to do to develop and test on your local machine:
Clone the repo or generate your own.
For this tutorial, we are using a hello world gRPC service generated with the Java Quarkus framework. Clone this repo or generate your own at code.quarkus.io.
Add the Google API Protobuf files to your project.
For Envoy to understand the HTTP mappings, you also need to include a couple of Protobuf files from the googleapis repository on GitHub in your include path. If you cloned the repository above, you'll notice these files are already located in the directory src/main/proto/google/api. If you are using your own generated project, you can copy these files from that location or from the googleapis repository.
Add the necessary annotations to your Protobuf file(s).
Using the examples in the http.proto and the Google documentation, annotate your service methods with the corresponding REST endpoints and methods. Some of them can get quite complex and are explained in the linked documentation. The following is a simple example from the hello.proto Protobuf file:
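As a sketch of what such an annotation looks like (the message names here are assumptions based on a typical Hello World service; the service name and route match the hello.HelloGrpc service and /say/{name} endpoint used later in this tutorial):

```proto
syntax = "proto3";

package hello;

import "google/api/annotations.proto";

service HelloGrpc {
  // Maps the SayHello RPC to GET /say/{name}, binding the
  // URL path parameter to the "name" field of HelloRequest.
  rpc SayHello (HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      get: "/say/{name}"
    };
  }
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```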
Generate a service descriptor set with protoc.
You can now use protoc to generate a binary file that includes all of your gRPC service methods. Create a directory for your descriptor set and run this command in the root of your project:
protoc -I./src/main/proto --include_imports --include_source_info --descriptor_set_out=./src/main/envoy/hello-proto.pb ./src/main/proto/hello.proto
This command generates the binary file and writes it to the location you specified with the --descriptor_set_out option.
Update the Envoy configuration file with the location of the generated service descriptor set and the services in your Protobuf file.
In the src/main/envoy directory, you will see an Envoy configuration file: envoy-config.yaml. This sample configuration listens for HTTP requests and proxies them to a gRPC server, assumed here to be your Quarkus gRPC service running at localhost:9000. Update lines 31-32 to reflect your descriptor set location and Protobuf services:
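The two values to update belong to Envoy's grpc_json_transcoder HTTP filter. As a sketch (the descriptor path and service name follow this tutorial's example):

```yaml
http_filters:
  - name: envoy.filters.http.grpc_json_transcoder
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
      # Path to the descriptor set generated with protoc
      proto_descriptor: "./src/main/envoy/hello-proto.pb"
      # Fully qualified names of the services to transcode
      services: ["hello.HelloGrpc"]
```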
Start the Envoy proxy server and your application.
After ensuring your gRPC service runs and is accessible at localhost:9000, use the following command to start the Envoy proxy:
envoy -c ./src/main/envoy/envoy-config.yaml
Test with an API client.
Using Postman or any other RESTful API client, test that you can get a healthy response from http://localhost:51051/say/{name}:
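For example, with curl (substituting a name for the path parameter):

```shell
# Hits the Envoy listener, which transcodes the request to gRPC;
# the response should come back with Content-Type: application/json
curl -i http://localhost:51051/say/World
```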
Note that you can still access the same service via gRPC as well:
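For example, with grpcurl, targeting the gRPC port directly (the SayHello method name is an assumption based on the example service):

```shell
grpcurl -plaintext -d '{"name": "World"}' localhost:9000 hello.HelloGrpc/SayHello
```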
You can now access your service via both REST and gRPC. To add more REST-friendly methods, you would need to annotate those as well. Remember to rebuild the descriptor set with protoc with each change to the proto files.
Testing with OpenShift and Service Mesh (Istio)
If you want to deploy this solution on an OpenShift cluster, you will need a few additional steps:
Install the necessary operators to set up Red Hat OpenShift Service Mesh.
Apply these manifests to install the operators or follow the documentation.
Configure the necessary resources to deploy an externally accessible gRPC service within the mesh.
You will need a ServiceMeshControlPlane, ServiceMeshMemberRoll, Gateway, Ingress, and VirtualService. See these reference manifests for examples.
Enroll your application namespace in the mesh.
Enroll any namespaces where you want to deploy the gRPC service. This is done by adding the namespace to the ServiceMeshMemberRoll.
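As a sketch, assuming the control plane lives in the istio-system namespace and the application namespace is called grpc-demo (both names are assumptions):

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default          # the member roll must be named "default"
  namespace: istio-system
spec:
  members:
    # Namespaces to enroll in the mesh
    - grpc-demo
```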
Build your app, build your image, push it to a repository, and deploy the gRPC service.
Build and push your image, then apply the application manifests to the namespace you enrolled in the mesh.
Encode your proto descriptor set.
After generating the descriptor set with protoc (you should already have it if you followed the local testing steps above), encode that file:
base64 ./src/main/envoy/hello-proto.pb | tr -d '\n\r' > ./src/main/envoy/hello-proto.pb.b64
Add the necessary information to an EnvoyFilter manifest.
Add the list of your Protobuf services and the content from the encoded binary file to an EnvoyFilter manifest with these yq commands (or simply copy and paste):
yq eval '.spec.configPatches[0].patch.value.typed_config.services="hello.HelloGrpc"' --inplace kubernetes/EnvoyFilter.yaml
yq eval '.spec.configPatches[0].patch.value.typed_config.proto_descriptor_bin="'"$(cat ./src/main/envoy/hello-proto.pb.b64)"'"' --inplace kubernetes/EnvoyFilter.yaml
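For reference, the overall shape of such an EnvoyFilter manifest looks roughly like this (the field paths match the yq commands above; the metadata, workload selector label, and match criteria are assumptions for illustration):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: grpc-transcoder
spec:
  workloadSelector:
    labels:
      app: hello            # assumed label on the gRPC service's pods
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
              subFilter:
                name: envoy.filters.http.router
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.grpc_json_transcoder
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
            services: ["hello.HelloGrpc"]
            # Populated from hello-proto.pb.b64 by the yq command above
            proto_descriptor_bin: "<base64-encoded descriptor set>"
```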
Apply the EnvoyFilter.
Apply the EnvoyFilter resource to the same namespace as the gRPC service Deployment using this command:
oc apply -f kubernetes/EnvoyFilter.yaml -n <namespace>
This EnvoyFilter resource customizes the Envoy sidecar configuration, and the Envoy sidecar handles the proxying for the gRPC service.
Test with an API client.
REST:
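For example, with curl against the host exposed by your Gateway and Ingress (the gateway host placeholder is left for you to fill in):

```shell
curl -i http://<gateway-host>/say/World
```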
Automation
To further optimize the solution, you may want to automate these steps in your CI/CD implementation. For example, Tekton (OpenShift Pipelines) can be used along with Helm to automate these tasks so that the latest descriptor set is always applied to the EnvoyFilter resource on the cluster.
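As a sketch, the per-deployment steps could be collected into a single Tekton Task step (the Task name and builder image are assumptions; the commands mirror the ones used earlier in this post):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: update-envoy-filter
spec:
  params:
    - name: namespace
      type: string
  steps:
    - name: transcode-descriptor
      image: registry.example.com/protoc-yq-oc:latest  # assumed image providing protoc, yq, and oc
      script: |
        #!/bin/sh
        set -e
        # Regenerate the descriptor set from the proto sources
        protoc -I./src/main/proto --include_imports --include_source_info \
          --descriptor_set_out=./src/main/envoy/hello-proto.pb ./src/main/proto/hello.proto
        # Base64-encode it and patch the EnvoyFilter manifest
        base64 ./src/main/envoy/hello-proto.pb | tr -d '\n\r' > ./src/main/envoy/hello-proto.pb.b64
        yq eval '.spec.configPatches[0].patch.value.typed_config.proto_descriptor_bin="'"$(cat ./src/main/envoy/hello-proto.pb.b64)"'"' \
          --inplace kubernetes/EnvoyFilter.yaml
        # Apply the updated filter to the target namespace
        oc apply -f kubernetes/EnvoyFilter.yaml -n $(params.namespace)
```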
Conclusion
In this post, we saw how to make gRPC services compatible with REST using minimal code changes. We leveraged the capabilities of Envoy and Istio to provide a way for any clients hoping to communicate with services externally to use whichever API architecture they choose. This solution can be automated and added as a step in any CI/CD pipeline so that all services can be transcoded upon deployment to any environment.
About the authors
Raffaele is a full-stack enterprise architect with 20+ years of experience. Raffaele started his career in Italy as a Java Architect, gradually moved to Integration Architect, and then to Enterprise Architect. Later, he moved to the United States to eventually become an OpenShift Architect for Red Hat consulting services, acquiring knowledge of the infrastructure side of IT in the process.
Currently, Raffaele holds a consulting position as a cross-portfolio application architect with a focus on OpenShift. For most of his career, Raffaele has worked with large financial institutions, which has given him an understanding of enterprise processes and of the security and compliance requirements of large enterprise customers.
Raffaele is part of the CNCF TAG Storage and contributed to the Cloud Native Disaster Recovery whitepaper.
Recently, Raffaele has been focusing on improving the developer experience by implementing internal developer platforms (IDPs).