How to deploy a web service on OpenShift

Learn how to containerize an application, create a deployment, and expose the service using HTTP.

OpenShift is a powerful platform that allows users to manage and scale applications easily in a containerized environment. While OpenShift is a great tool with many different capabilities, this tutorial focuses on doing the bare minimum to get an API up and exposed on a cluster.

This tutorial assumes that you have basic knowledge of container engines and API development.

Prerequisites

Ensure you meet the following prerequisites to follow the steps in this tutorial:

  • Podman installed on your local machine
  • A Quay.io account (or access to another public image registry)
  • Access to an OpenShift cluster and the OpenShift CLI (oc)
  • jq installed (optional, used to parse command output near the end)
  • A web service to deploy, or the sample Go application used throughout this guide (available in the GitHub repository linked at the end of this article)

The first few sections of this guide walk through how to containerize and publish your application to an image registry. If you want to use the sample application and skip these steps, skip ahead to Create a deployment.

Containerize your application

So you've developed a web service—now what? First, you'll need to add a Containerfile to your project's root so that Podman can properly build a container image for your application. The exact contents of your Containerfile will depend on your application, but here is an example from the sample application I'm using:

# Build stage: use the Go toolchain image to compile the application
FROM golang:alpine AS build-env
RUN mkdir /go/src/app && apk update && apk add git
ADD main.go go.mod go.sum /go/src/app/
WORKDIR /go/src/app
# Produce a statically linked binary so it can run in an empty base image
RUN go mod download && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

# Final stage: copy only the compiled binary into an empty "scratch" image
FROM scratch
WORKDIR /app
COPY --from=build-env /go/src/app/app .
ENTRYPOINT ["./app"]

This Containerfile uses a multistage build, so it defines two images. The first image contains the tools needed to build and compile the application. Once the application is built, you no longer need those build tools, so the compiled artifact is copied into an empty "scratch" image. The result is a smaller image, which means faster load times in the cluster, and a safer one, since there are no lingering tools that could be compromised.

Now you're ready to build the image. Do this by running the following command in your terminal from the root directory of your project:

$ podman build . -t web-service-gin --platform linux/amd64
…
Successfully tagged quay.io/avulaj/web-service-gin:latest
acd11248b7c2140245d06f3ace4418076201a82642852cf0b01ed8185e580603
  • podman build is the root command.
  • The next argument represents the directory where your Containerfile is located. Use . since you are currently in the project's root directory.
  • -t represents the name that will be assigned to the image. In this case, choose web-service-gin.
  • --platform specifies that you are building this image for linux/amd64, which matches the architecture of the OpenShift cluster nodes used in this tutorial. Note that building for an architecture different from your local machine's will increase build time.

As shown above, this command should output an image ID. Note this ID, as you'll need it later.
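
You can also confirm the benefit of the multistage build at this point by listing the image and checking its size; for a small Go binary like this one, the scratch-based image is typically only a few megabytes:

$ podman images web-service-gin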

Now you can use this image to test your application locally in a Podman container. Do this with the following command:

$ podman run -d --rm -p8080:8080 <image_id>
  • podman run is the root command.
  • -d is optional and runs the container in the background. Without this flag, you'll need a separate terminal to run commands.
  • --rm is optional and tells Podman to automatically remove the container after it exits.
  • -p specifies a container port (or range of ports) to publish to the host. This will depend on how your application is built. In my case, I've exposed my application on port 8080 and published it to the host as such.
  • image_id is the ID that was provided after building your image. If you've lost this ID, you can run podman images to fetch it again.

This command will output a container ID—you'll need this later to clean up the running container.

At this point, your API should be available at localhost on your machine. For example, if you've been using my sample application for this tutorial, you should see the following output:

$ curl localhost:8080
"Hello"

Congratulations! You've successfully containerized your application, and you're ready to move on to the next step. You can safely stop your container with the following command:

$ podman stop <container_id>

Publish your image

Now that you've built an image for your application, you need to publish it somewhere so that you can create an OpenShift deployment with it. For this tutorial, you'll use Quay. If you have a Quay account, log into it now. Otherwise, create a new one.
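
Note that logging in to the web console does not authenticate your local Podman client. If you haven't already logged in to the registry on the command line, do so before pushing; you'll be prompted for your Quay credentials:

$ podman login quay.io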

Once logged in to the web console, create a new public repository with a name of your choosing. After you create the repository, push the image to it with the following command:

$ podman push <image_id> quay.io/<username>/<repository_name>:<tag>
  • podman push is the root command.
  • <image_id> is replaced with the ID of the image you built in the previous section.
  • <username> is your username on quay.io.
  • <repository_name> is the name you chose for your repository.
  • <tag> is a custom tag to identify your image. This can be anything you choose to identify your image in your repository. If in doubt, follow semantic versioning to be safe.
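
Once the push completes, a quick sanity check is to pull the image back under its new name. Because the repository is public, this should work without credentials:

$ podman pull quay.io/<username>/<repository_name>:<tag>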

Create a deployment

So you've built your REST API, containerized it, and even published it to an image registry. Now for the fun part—deploying the application on an OpenShift cluster.

In OpenShift, a Deployment object describes how to create or modify the pods that hold a containerized application by defining the desired state of a particular component. A Deployment creates and manages ReplicaSets, which in turn orchestrate pod lifecycles and guarantee the availability of a specified number of identical pods. Deployments can be represented as YAML files, often kept within the project itself to ensure consistency across different environments.

While the fine details of a deployment file vary by application, here is the deployment file used in this sample application:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gin-app
  name: gin-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gin-app
  template:
    metadata:
      labels:
        app: gin-app
    spec:
      containers:
        - image: quay.io/avulaj/web-service-gin
          imagePullPolicy: Always
          name: gin-app
          ports:
            - containerPort: 8080

There's a lot here, so let's break down the key parts of this file:

  • .metadata.labels is a set of key/value pairs used to specify attributes for organizing objects.
  • .metadata.name indicates the deployment's name, gin-app in this case.
  • .spec.replicas specifies how many replicas of the application will be available.
  • .spec.selector defines how the ReplicaSet created from the deployment will find pods to manage.
  • .spec.template defines the PodSpec for the created pods:
    • .metadata.labels.app specifies the pod labels.
    • .spec defines one container for the pods, gin-app, which runs quay.io/avulaj/web-service-gin. It always pulls the specified image when starting a new pod and exposes port 8080 on the container.

Apply the deployment

Now, you're ready to apply the deployment to a cluster. Start by logging into your cluster through the OpenShift CLI. Then, create a new project with the following command:

$ oc new-project <project name>

Note that running new-project also moves you into the newly created namespace. For the sake of this tutorial, I've named my project gin-app.
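
If you move to a different project later, you can switch back at any time with:

$ oc project <project name>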

Next, apply the deployment to the cluster:

$ oc apply -f <path to deployment file>

If this works, you should see your pods running in the namespace you created. Here is a sample output from applying the deployment from the previous section:

$ oc get pods
NAME                       READY   STATUS    RESTARTS   AGE
gin-app-76dc4f79b6-nxldg   1/1     Running   0          53s
gin-app-76dc4f79b6-x6jhb   1/1     Running   0          53s
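
Rather than polling oc get pods, you can also wait for the rollout to finish; this command blocks until all replicas of the deployment (gin-app in this example) are available:

$ oc rollout status deployment/gin-app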

Create a service

So now the application is running on pods in the cluster, but that's not too valuable if you can't make any requests to it. You need a service.

A Service object load-balances traffic across the pods it selects and creates a DNS entry that other pods within the cluster can use to reach them. While you can write a custom YAML file to create your service (similar to how you created the deployment), OpenShift can create a service for your application by simply exposing the deployment:

$ oc expose deployment <deployment name>

If you've forgotten the name of your deployment, you can retrieve it with:

$ oc get deployment

Verify you created the service with:

$ oc get svc

And if you want to see more details about your service, see the generated YAML with:

$ oc get svc <svc name> -oyaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-11-30T21:04:51Z"
  labels:
    app: gin-app
  name: gin-app
  namespace: gin-app
  resourceVersion: "14092996"
  uid: 6f45b1a4-436e-45fc-99bd-239461b08880
spec:
  clusterIP: 172.30.19.7
  clusterIPs:
  - 172.30.19.7
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: gin-app
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
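
Before exposing the service publicly, you can optionally verify it from inside the cluster by running a short-lived pod and curling the service's internal DNS name (the gin-app service in the gin-app namespace, as shown in the output above). The UBI image used here is just one option; any image that includes curl will do:

$ oc run curl-test --rm -it --restart=Never --image=registry.access.redhat.com/ubi9/ubi -- curl -s http://gin-app.gin-app.svc.cluster.local:8080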

Create a route

You deployed the application and put a service in place to manage traffic. All that's left is to create a route. Routes in OpenShift allow applications to be hosted at a public URL. This example uses a secured route with default encryption. In the OpenShift CLI, run:

$ oc create route edge --service=gin-app --insecure-policy=Redirect
  • oc create is the base command.
  • route is a subcommand of create used to expose a service externally.
  • edge is a subcommand of route that declares you want the route to use edge TLS termination, meaning TLS is terminated at the router before traffic reaches your pods.
  • --service specifies which service the route exposes.
  • --insecure-policy specifies what to do with insecure traffic. It isn't required, but for the sake of this demo, it redirects HTTP traffic to HTTPS.
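
If you prefer to keep the route in version control alongside the deployment, the command above corresponds roughly to a Route manifest like the one below. Treat it as a sketch: the name and namespace match the sample application, and the host field is omitted so that OpenShift generates one for you.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: gin-app
  namespace: gin-app
spec:
  port:
    targetPort: 8080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: gin-app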

Verify the route's creation:

$ oc get routes -o yaml <name of resource>

With the route created, you can finally test the API. The previous command should have shown your public URL at .spec.host; however, you can also get it more directly with:

$ oc get route <route name> -ojson | jq -r '.spec.host'

Then, you can curl this URL and see that the API responds:

$ curl -L gin-app-gin-app.apps.avulaj-test.0f83.s1.devshift.org/albums
[
    {
        "id": "1",
        "title": "Blue Train",
        "artist": "John Coltrane",
        "price": 56.99
    },
    {
        "id": "2",
        "title": "Jeru",
        "artist": "Gerry Mulligan",
        "price": 17.99
    },
    {
        "id": "3",
        "title": "Sarah Vaughan and Clifford Brown",
        "artist": "Sarah Vaughan",
        "price": 39.99
    }
]

Wrap up

This tutorial took an application from scratch, deployed it on an OpenShift cluster, and publicly exposed it with HTTP. From containerizing the application to building a deployment and eventually exposing a service with a route, OpenShift provides a straightforward process to serve your web services publicly.

All the code for this guide is available in my GitHub repository.
