As enterprises begin their container journey and onboard their applications onto the OpenShift Container Platform, application monitoring becomes critical for anticipating problems and discovering bottlenecks in a production environment. Application monitoring is also one of the biggest challenges faced by almost every organization that is in the process of migrating, or has already migrated, its workloads to OpenShift.
The growing adoption of microservices architecture makes monitoring more complex, since a large number of distributed applications are communicating with each other. What used to be a function or a direct call in a monolithic application is now a network call from one microservice to another. Running multiple instances of these microservices as containers adds yet another layer of complexity.
Starting with OpenShift 4.3, you can use the platform’s monitoring capabilities for your application workloads running on OpenShift. This helps keep the application monitoring centralized. You don’t need to manage an additional monitoring solution as the platform now provides these capabilities.
OpenShift 4.3 gives you the flexibility to extend access to these application metrics beyond cluster administrators. This means that any user or developer can set up metrics collection for their applications. See setting up metrics collection for more details.
Let’s take a look at how you can monitor your application in OpenShift 4.3 using the platform’s capabilities by following these 5 steps:
Prerequisites:
- An OpenShift 4.3 cluster is up and running
- You have cluster administrator privileges
- The oc client is installed
Step 1: Enable application monitoring in OpenShift 4.3
Log in as a cluster administrator.
Create the cluster-monitoring-config configmap if one doesn't exist already. See configuring the monitoring stack for more details:
oc -n openshift-monitoring create configmap cluster-monitoring-config
Edit the configmap to add config.yaml and set the techPreviewUserWorkload setting to true:
oc -n openshift-monitoring edit configmap cluster-monitoring-config
This is how the configmap should look:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    techPreviewUserWorkload:
      enabled: true
Verify by checking whether prometheus-user-workload pods are created and are in running state:
$ oc -n openshift-user-workload-monitoring get pod
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-684fcd47b6-bdmpc   1/1     Running   0          144m
prometheus-user-workload-0             5/5     Running   1          144m
prometheus-user-workload-1             5/5     Running   1          144m
This confirms that OpenShift monitoring is now enabled to monitor application workloads.
Step 2: Deploy a Quarkus microservice with a MicroProfile metrics endpoint
In this example, I am going to use a simple Quarkus microservice that exposes MicroProfile metrics on the /metrics endpoint to demonstrate the application monitoring capabilities of OpenShift 4.3. We will configure OpenShift monitoring to scrape this metrics endpoint in the next steps. If you are interested in the application code, visit the GitHub repository.
Let’s create the OpenShift objects for the Quarkus application using the oc apply command. We will create the following objects:
- ImageStream
- BuildConfig
- Deployment
- Service
- Route
oc apply -f https://raw.githubusercontent.com/nmalvankar/quarkus-quickstarts/master/microprofile-metrics-quickstart/.openshift/templates/quarkus-application.yaml
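For reference, the Service created by that template needs to expose the HTTP port the Quarkus application listens on so that it can be scraped in the next step. The following is only a minimal sketch; the names, labels, and port number are illustrative assumptions, and the authoritative definitions live in the template linked above.

apiVersion: v1
kind: Service
metadata:
  name: quarkus-quickstart        # illustrative name, matching the other objects
  labels:
    app: quarkus-quickstart       # label a ServiceMonitor can select on (assumption)
spec:
  selector:
    app: quarkus-quickstart       # routes traffic to the application pods
  ports:
    - name: web                   # named port referenced later by the ServiceMonitor
      port: 8080                  # assumed Quarkus HTTP port
      targetPort: 8080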
Let’s start the Quarkus application build. This uses S2I (source-to-image) to build the Quarkus application image quarkus-quickstart, which triggers a new deployment and creates an application pod.
oc start-build quarkus-quickstart
Verify that the application pod is up and running:
$ oc get pods -n quarkus
NAME                          READY   STATUS      RESTARTS   AGE
quarkus-quickstart-1-build    0/1     Completed   0          57m
quarkus-quickstart-1-cr7cq    1/1     Running     0          14m
quarkus-quickstart-1-deploy   0/1     Completed   0          15m
Once the application pod is up and running, you should be able to access the application metrics at /metrics. The URL should look like this: http://<hostname_of_the_route>/metrics
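As a quick check from a terminal, you can fetch that endpoint with curl. The command below assumes the Route is named quarkus-quickstart and lives in the quarkus namespace; adjust the names to match the objects created by the template.

curl http://$(oc get route quarkus-quickstart -n quarkus -o jsonpath='{.spec.host}')/metrics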
Step 3: Set up a ServiceMonitor/PodMonitor so that OpenShift Monitoring scrapes the application metrics
To use the metrics exposed by the Quarkus microservice, let’s configure OpenShift Monitoring to scrape metrics from the /metrics endpoint. This can be achieved by using either a ServiceMonitor, a custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor, a CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod.
In this case, let’s use a ServiceMonitor CRD for monitoring the Quarkus microservice:
oc apply -f https://raw.githubusercontent.com/nmalvankar/quarkus-quickstarts/master/microprofile-metrics-quickstart/.openshift/templates/quarkus-service-monitor.yaml
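The manifest applied above defines a ServiceMonitor along these lines. The sketch below is illustrative, assuming the Service from Step 2 carries an app: quarkus-quickstart label and names its HTTP port web; the authoritative version is in the linked repository.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-quarkus-monitor
  namespace: quarkus
spec:
  selector:
    matchLabels:
      app: quarkus-quickstart     # assumed label on the Service
  endpoints:
    - port: web                   # assumed named port on the Service
      path: /metrics              # MicroProfile metrics endpoint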
Verify that the ServiceMonitor is running:
$ oc get ServiceMonitor -n quarkus
NAME                         AGE
prometheus-quarkus-monitor   5m
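If you prefer the PodMonitor approach mentioned earlier, which skips the Service entirely and lets Prometheus scrape the pods directly, an equivalent sketch might look like the following. This is an alternative for comparison, not something the linked template applies, and the pod label and port name are illustrative assumptions.

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus-quarkus-pod-monitor   # hypothetical name
  namespace: quarkus
spec:
  selector:
    matchLabels:
      app: quarkus-quickstart            # assumed label on the pods
  podMetricsEndpoints:
    - port: web                          # assumed named container port
      path: /metrics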
Step 4: Set up alerts for the Quarkus service
Now, let’s create an alerting rule that fires alerts based on the values of a service metric. To demonstrate a simple alert, let's create a rule that fires when the value of the metric vendor_cpu_processCpuTime_seconds is greater than 8 seconds:
oc apply -f https://raw.githubusercontent.com/nmalvankar/quarkus-quickstarts/master/microprofile-metrics-quickstart/.openshift/templates/quarkus-alerting-rule.yaml
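The rule applied above is roughly equivalent to the following sketch, based on the threshold just described: fire when vendor_cpu_processCpuTime_seconds exceeds 8 seconds. The group name, alert name, severity label, and annotation text are illustrative assumptions; the exact rule is in the linked repository.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: quarkus-alert
  namespace: quarkus
spec:
  groups:
    - name: quarkus-quickstart           # hypothetical group name
      rules:
        - alert: HighProcessCpuTime      # hypothetical alert name
          expr: vendor_cpu_processCpuTime_seconds > 8
          labels:
            severity: warning            # assumed severity
          annotations:
            message: Process CPU time for the Quarkus service has exceeded 8 seconds.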
Verify that the PrometheusRule is created:
$ oc get PrometheusRule -n quarkus
NAME            AGE
quarkus-alert   9m14s
Step 5: Use OpenShift Monitoring to access the metrics of the Quarkus microservice
Log in to the OpenShift web console as a cluster administrator and verify that OpenShift Monitoring is able to scrape the application metrics, as shown in the screenshot below.
Check the alerts using the Alertmanager UI. Verify that an alert is visible for the Quarkus application once the value of the metric vendor_cpu_processCpuTime_seconds exceeds 8 seconds. You can also modify the alerting rule to use any other metric.
Note: Application monitoring is currently a Technology Preview feature in OpenShift 4.3 and is not recommended for production use.
In these 5 simple steps, you can easily monitor your application workloads on OpenShift 4.3 without having to install any additional software. OpenShift 4.3 also allows you to expose custom application metrics for autoscaling. This gives you much-needed flexibility to autoscale an application pod based on custom application metrics in addition to CPU and memory usage. OpenShift 4.3 provides a lot of exciting new features and enhancements. See the release notes for more details.