
This is a guest post by Michaël Morello, principal software engineer at Elastic. 

Now you can run Elasticsearch, Kibana, or the entire Elastic Stack on Red Hat OpenShift with Elastic Cloud on Kubernetes (ECK). It’s the easiest way to get started with the official offering from Elastic. Let’s explore how you can get up and running quickly, as well as how to use ECK for some of the most common use cases.


Now it’s even easier to get the Elastic operator running on OpenShift and integrate it into your OpenShift ecosystem. First, we’ll install the operator through the OpenShift OperatorHub web interface. Then, we’ll see how to leverage the service serving certificates to encrypt the HTTP traffic between the Elastic Stack and the OpenShift components. We'll also see how this makes it easy to create re-encrypt routes to expose Elastic services outside of your OpenShift cluster.

OpenShift comes with preinstalled monitoring components. We’ll show you how to deploy a Metricbeat instance to grab OpenShift cluster metrics, store them in Elasticsearch, and visualize them in Kibana.


To run the following instructions, you must first:

  • Deploy an OpenShift 4.6 cluster with the monitoring stack enabled.
  • Log in as an administrator.
  • Have a dedicated namespace or OpenShift project to hold the Elastic components.
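These prerequisites can be checked from the command line. The following is a minimal sketch, assuming the oc client is installed and you are already logged in:

```shell
# Confirm the client and server versions (a 4.6.x server is expected)
oc version

# Confirm the identity you are logged in as
oc whoami

# Confirm you have cluster-admin-level permissions
oc auth can-i '*' '*' --all-namespaces

# Confirm the monitoring stack is running
oc get pods -n openshift-monitoring
```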

In this post, we use a dedicated project named elastic-monitoring. Create it with:

oc new-project elastic-monitoring

Deploy the Elasticsearch (ECK) operator on OpenShift

The certified Elastic operator is available in the OperatorHub. It only takes a few clicks to install it through the OpenShift console:

  • In the OpenShift web console, go to the left pane and select Administrator in the dropdown menu.
  • Select Operators, then OperatorHub, and search for "Elasticsearch (ECK) Operator":

  • Click on the tile (skip the community version if you want to install the certified operator). Click on Install, leave the default selection, and click again on Install.

Congratulations, the operator is now running on your OpenShift cluster!

The operator is deployed in the openshift-operators namespace. To get its status from the command line, run the following command:

$ oc get pods -n openshift-operators -l control-plane=elastic-operator
NAME                               READY   STATUS    RESTARTS   AGE
elastic-operator-bc7bbd885-j2sth   1/1     Running   0          53m

To get the operator logs, run this command:

$ oc logs -l control-plane=elastic-operator -n openshift-operators -f
{"log.level":"info","@timestamp":"2020-11-16T09:10:57.231Z","log.logger":"association.kb-es-association-controller","message":"Starting reconciliation run","service.version":"1.3.0+6db1914b","service.type":"eck","ecs.version":"1.4.0","iteration":10,"namespace":"openshift-monitoring","kb_name":"kibana"}
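You can also confirm that the operator installation registered the Elastic custom resource definitions (the CRD names below are the ones shipped with ECK 1.3):

```shell
oc get crd elasticsearches.elasticsearch.k8s.elastic.co \
           kibanas.kibana.k8s.elastic.co \
           beats.beat.k8s.elastic.co
```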

Deploy an Elasticsearch cluster and Kibana

We want to deploy Elasticsearch to collect metrics from your OpenShift cluster and use Kibana to visualize them.

Let's deploy an Elasticsearch cluster with three data nodes. To make sure that the settings allow the Elasticsearch cluster to handle at least 100GB of data, apply the following manifest:
cat <<EOF | oc apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elastic-monitoring
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms4g -Xmx4g
          resources:
            requests:
              memory: 8Gi
              cpu: 1
            limits:
              memory: 8Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: standard
    config:
      node.roles: [ "master", "data" ]
      node.store.allow_mmap: false
EOF

If you want more information on how to customize the volume claim or the podTemplate, see the documentation.

To visualize your metrics through dashboards, deploy a Kibana instance, associated with the Elasticsearch cluster that was previously created:

cat <<EOF | oc apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-monitoring
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
EOF

With the elasticsearchRef parameter, an encrypted connection between Kibana and Elasticsearch is automatically established. Make sure that the status for both Elasticsearch and Kibana is green:

% oc get es,kb -n elastic-monitoring
NAME                                                       HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch   green    3       7.10.0    Ready   52m

NAME                                  HEALTH   NODES   VERSION   AGE
kibana.kibana.k8s.elastic.co/kibana   green    1       7.10.0    48m

In the next section, we’ll see how to expose Kibana and encrypt the traffic from your browser to Kibana using a re-encrypt route.

Securing traffic with the service serving certificates

We now want to access Kibana with a web browser. Using a re-encrypt route is a common solution on OpenShift. Re-encrypt routes allow you to manage potentially sensitive public certificates at the router level, while still relying on a custom and private certificate authority at the pod level:

Let's see how to create a re-encrypt route and create a trust-relationship between the router and Kibana.

The OpenShift Service CA Operator is installed on OpenShift to help secure communications between services in the cluster. Certificates issued by the OpenShift Service CA Operator are trusted by other OpenShift services, which makes it easy to encrypt traffic with components like routers or the Prometheus server, as we'll see later in this blog post. By default, the Elastic operator deploys its own certificate authority to encrypt the HTTP traffic, but it can also delegate that task and load the TLS key and certificate from any Secret.
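As a generic illustration of the mechanism (the service and Secret names below are hypothetical), requesting a serving certificate is just a matter of annotating a Service; the Service CA Operator then creates a Secret holding the TLS key and certificate:

```shell
# "my-service" and "my-service-tls" are hypothetical names for illustration
oc annotate service my-service \
  service.beta.openshift.io/serving-cert-secret-name=my-service-tls

# after a few seconds, the Secret exists and contains tls.crt and tls.key
oc get secret my-service-tls
```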

First we need to set the right annotation on the Kibana service generated by the operator to let the OpenShift Service CA know that we want a TLS certificate for that service. Then we need to update the Kibana manifest to use that certificate. All of that can be done in the Kibana manifest:

cat <<EOF | oc apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-monitoring
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      metadata:
        annotations:
          # request OpenShift to create a certificate for the Kibana service in a Secret named "kibana-openshift-tls"
          service.beta.openshift.io/serving-cert-secret-name: kibana-openshift-tls
    tls:
      certificate:
        # use the previously created Secret on the Kibana endpoint
        secretName: kibana-openshift-tls
EOF

The Kibana service is now using a certificate signed by the OpenShift certificate authority. To check it, run the following command in the Kibana pod:

bash-4.4$ curl --insecure -vvI
* Server certificate:
*  subject: CN=kibana-kb-http.openshift-monitoring.svc
*  start date: Nov 16 10:08:01 2020 GMT
*  expire date: Nov 16 10:08:02 2022 GMT
*  issuer: CN=openshift-service-serving-signer@1605510277
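You can also inspect the Secret created by the Service CA Operator from outside the pod and confirm the subject, issuer, and validity dates (assuming the manifest above was applied in the elastic-monitoring project):

```shell
oc get secret kibana-openshift-tls -n elastic-monitoring \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -subject -issuer -dates
```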

Create the re-encrypt route and use the public certificate of your choice:

cat <<EOF | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: kibana
  namespace: elastic-monitoring
spec:
  host: <your public hostname here>
  to:
    kind: Service
    name: kibana-kb-http
    weight: 100
  port:
    targetPort: https
  tls:
    termination: reencrypt
    certificate: |
      -----BEGIN CERTIFICATE-----
      <public certificate from your certification authority>
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      <private key of the public certificate>
      -----END RSA PRIVATE KEY-----
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
EOF

That's it — the connection from your browser to Kibana is now fully trusted and encrypted, not only from the browser to the OpenShift router but also inside the OpenShift cluster itself! You don't need to worry about internal certificate rotation, as the private certificate is renewed automatically by the OpenShift Service CA Operator.
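To confirm the route, you can fetch its public hostname (the one you set in the Route manifest) and probe it; Kibana typically answers with a redirect to its login page:

```shell
HOST=$(oc get route kibana -n elastic-monitoring -o jsonpath='{.spec.host}')
curl -sI "https://${HOST}" | head -n 1
```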

Log in as the elastic user. Get the password with the following command:

  oc get secrets/elasticsearch-es-elastic-user -n elastic-monitoring --template='{{.data.elastic | base64decode }}'

Collect and store cluster metrics

OpenShift comes with a lot of helpful components to monitor your cluster. For example, kube-state-metrics is already deployed and a Prometheus instance is already installed. By leveraging solutions like index lifecycle management (ILM) or searchable snapshots, Elasticsearch can help you create a long-term storage solution for those metrics.

Furthermore, since Beats are supported, you can deploy Metricbeat to grab all those metrics, store them in Elasticsearch, and visualize them on pre-existing dashboards automatically created by Metricbeat.

Capture OpenShift cluster metrics with Metricbeat

Metricbeat can fetch metrics from various components. Let's see how to configure Metricbeat to get metrics from your hosts and from several core components of your OpenShift cluster. Before we go any further, we need to allow the Metricbeat pods to run with the privileged Security Context Constraint, which is required to collect some system metrics:

oc adm policy add-scc-to-user -z metricbeat -n elastic-monitoring privileged

The whole configuration for OpenShift 4.6 is available here. If you want to try it, apply the manifest with the following command:

oc apply -f <manifest URL>

After a few moments, you can see that the Metricbeat health is green:

$ oc get beats -n elastic-monitoring
NAME         HEALTH   AVAILABLE   EXPECTED   TYPE         VERSION
metricbeat   green    6           6          metricbeat   7.10.0

Besides the authorization objects (Roles and RoleBindings), let's take a closer look at this configuration to understand what happens behind the scenes — for example, how Metricbeat collects metrics from the controller manager. The controller manager is an important component of the Kubernetes control plane that runs core controllers like the DaemonSet controller, the StatefulSet controller, the Kubernetes garbage collector, and more.

The controller manager is running as a Pod and exposes metrics that you may want to grab to monitor your cluster. To collect metrics from the controller manager, we use the following configuration:

metricbeat:
  autodiscover:
    providers:
      - type: kubernetes
        node: ${NODE_NAME}
        templates:
          - condition:
              equals:
                kubernetes.namespace: openshift-kube-controller-manager
            config:
              - module: kubernetes
                enabled: true
                metricsets:
                  - controllermanager
                hosts: [ "https://${data.host}:10257" ]
                bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

In this extract, Metricbeat discovers the pods that run in the namespace openshift-kube-controller-manager — these pods run the controller manager. Metricbeat authenticates using its ServiceAccount token, mounted as a file in the Metricbeat pod itself.
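You can verify what the autodiscover condition matches by listing those pods along with their labels:

```shell
oc get pods -n openshift-kube-controller-manager --show-labels
```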

Once deployed, go to Kibana to visualize the controller manager metrics in the dashboard "[Metricbeat Kubernetes] Controller Manager dashboard":

There are five Kubernetes dashboards in Kibana. This last one gives an overview of your cluster:

There is also a dashboard dedicated to Core DNS monitoring:

Collect OpenShift-specific metrics with the Prometheus federation API

The Prometheus instance installed by default on OpenShift grabs some OpenShift-specific metrics. For example, you may want to collect the cluster operator metrics already collected by Prometheus. Using the Prometheus federation API is a great starting point because it helps collect these metrics without configuring a Metricbeat module for each new cluster operator.
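To see what the federation endpoint returns before wiring it into Metricbeat, you can query it from inside any pod whose ServiceAccount is allowed to read metrics — a hedged sketch, as RBAC and network policies in your cluster may differ:

```shell
# run from inside a pod; the token and service CA are mounted by OpenShift
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  'https://prometheus-k8s.openshift-monitoring.svc:9091/federate?match[]=%7Bjob%3D~%22cluster-.*%22%7D' \
  | head
```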

cat <<EOF | oc apply -f -
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat-federate
  namespace: elastic-monitoring
spec:
  type: metricbeat
  version: 7.10.0
  elasticsearchRef:
    name: elasticsearch
  config:
    metricbeat.modules:
      - module: prometheus
        hosts: ["https://prometheus-k8s.openshift-monitoring.svc:9091"]
        metrics_path: '/federate'
        query:
          'match[]': '{job=~"cluster-.*"}'
        # Use service account based authorization:
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.certificate_authorities:
          - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  deployment:
    podTemplate:
      spec:
        serviceAccountName: metricbeat
        automountServiceAccountToken: true
        containers:
        - args: ["-e", "-c", "/etc/beat.yml"]
          name: metricbeat
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        volumes:
        - emptyDir: {}
          name: beat-data
EOF

Those metrics are now safely stored in Elasticsearch and can be queried with Kibana:
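To double-check from the command line that the federated metrics landed in Elasticsearch, you can port-forward the Elasticsearch service and count the documents produced by the prometheus module — a sketch, assuming the elasticsearch-es-http and elasticsearch-es-elastic-user names that follow the ECK <cluster-name>-es-* convention used earlier:

```shell
# in one terminal: expose Elasticsearch locally
oc port-forward service/elasticsearch-es-http 9200 -n elastic-monitoring

# in another terminal: authenticate as the elastic user and count documents
PW=$(oc get secret elasticsearch-es-elastic-user -n elastic-monitoring \
  -o go-template='{{.data.elastic | base64decode}}')
curl -sk -u "elastic:${PW}" \
  'https://localhost:9200/metricbeat-*/_count?q=event.module:prometheus'
```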

Where to go next?

The certified Elastic Cloud on Kubernetes Operator is now available in your OpenShift web console — give it a try following the instructions from this blog post. We focused on Metricbeat, but additional Beats such as Auditbeat or Packetbeat can help you observe your OpenShift cluster even further.

To understand why Elasticsearch 7.10 is a great place to store your metrics, check out our blog post on saving space and money with improved storage efficiency in Elasticsearch 7.10. Also, with version 7.10 Elasticsearch allows you to search data stored on object stores like S3 (beta feature in 7.10), opening new possibilities for high-volume observability-related data. Find out more in our Elasticsearch searchable snapshots blog post.

