
You’ve probably heard this not-so-secret secret plenty of times before: Kubernetes secrets are not secrets! The values are base64-encoded strings stored in etcd, which means anyone with access to your cluster can easily decode your sensitive data. Anyone? Well, yes, just about anyone, especially if RBAC is not set up correctly on your cluster: anyone who can reach the APIs or access etcd directly, or anyone who is authorized to create a pod or deployment in a namespace and can use that access to retrieve any secret in that namespace. How can you ensure that secrets and other sensitive information on your clusters, such as tokens, are not compromised? And by you, I mean everyone working both on and off the clusters. In this blog post, we will discuss a holistic approach to encrypting your applications' crown jewels as you build, deploy and run applications on OpenShift.

About secrets on the cluster

Applications running on Kubernetes clusters can use Kubernetes Secrets so that they don't need to store sensitive data such as tokens or passwords within the application code.  

The typical workflow for secrets within an OpenShift/Kubernetes cluster today looks like this: app developers using pipelines treat git as the source of truth for the configuration deployed to their cluster. Access control can help secure this repository, but that by itself isn't always sufficient to keep the application's sensitive information from being compromised. During the deploy phase, Kubernetes Secret resources are created on the cluster by the API server (you can read more about the lifecycle here). Secrets stored in etcd can be used by application pods in one of three ways: 1) as files in a volume mounted on one or more of its containers, 2) as a container environment variable, or 3) by the kubelet when pulling images for the pod.

In all three cases, the value in the secret is decoded before use.
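To make this concrete, here is a minimal sketch (the secret, pod, and key names are hypothetical) of a pod consuming the same Secret both as an environment variable and as files in a mounted volume:

```
# Hypothetical example: create a Secret and consume it from a pod
oc create secret generic app-credentials --from-literal=api-token=s3cr3t

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    env:
    - name: API_TOKEN                 # secret exposed as an environment variable
      valueFrom:
        secretKeyRef:
          name: app-credentials
          key: api-token
    volumeMounts:
    - name: creds                     # secret exposed as files in a volume
      mountPath: /etc/credentials
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: app-credentials
EOF
```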

So, now that we know how it works, why is it not enough to base64 encode the secret?

What is base64 encoding and why is it not enough?

Base64 encoding is a binary-to-text encoding scheme that represents binary data as printable characters, encoding each 3-byte (24-bit) group as four 6-bit base64 digits. It is commonly used to transfer binary data, such as image files, over text-based protocols, ensuring the data survives transport intact. Unfortunately, encoding is not encrypting: it provides no confidentiality at all.

Try this on any Linux terminal.  

```
% echo -n 'not encrypted' | base64
bm90IGVuY3J5cHRlZA==
% echo -n 'bm90IGVuY3J5cHRlZA==' | base64 --decode
not encrypted
```

As you can see, anyone with access to your system can easily decode your secrets, either while they are being transferred to the cluster or while they are being used on the cluster.

Challenges

As a DevSecOps admin, you clearly have two challenges here:

  1. How do you encrypt and manage sensitive data outside the cluster, that is, before it reaches the cluster during the build and deploy phases?
  2. How do you secure your sensitive data while running applications inside the cluster?

Let’s look at how we address these challenges with OpenShift and the ecosystem of Open Source tools that work seamlessly with OpenShift.

Encrypting secrets before being deployed to the Cluster

As a developer, you can encrypt sensitive information used by your application before you push it into the git repo, aka the ‘source of truth’ for your application. Two common approaches to encrypting secrets before they are committed into the git repository and deployed on OpenShift clusters are described below:

Using Bitnami Sealed Secrets

  • The cluster admin deploys the Sealed Secrets controller on the OpenShift cluster.
  • Developers install the kubeseal CLI on their local machines.
  • The developer creates a Secret resource, which is then encrypted (sealed) by the kubeseal CLI, fetching the public key from the controller at runtime. For network-restricted environments, the public key can also be stored locally and used by kubeseal. Kubeseal produces a SealedSecret custom resource (see the sketch after this list).
  • The developer pushes this CR into their git repo.
  • CD tools such as Argo CD can be used to deploy the CR on the cluster.
  • The controller detects the SealedSecret resource and decrypts it using the private key held on the cluster, producing a regular Kubernetes Secret.
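A minimal sketch of the developer-side flow might look like this (the secret name and value are hypothetical; kubeseal fetches the controller's public key from the target cluster unless you pass a locally stored certificate with --cert):

```
# Create the Secret manifest locally without applying it to the cluster
oc create secret generic db-credentials \
  --from-literal=password='s3cr3t' \
  --dry-run=client -o yaml > secret.yaml

# Seal it; only the controller's private key can decrypt the result
kubeseal --format yaml < secret.yaml > sealedsecret.yaml

# Commit the SealedSecret (never the plain Secret) to git
git add sealedsecret.yaml && git commit -m "Add sealed db credentials"
```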

Using KSOPS/Mozilla SOPS

If you use Argo CD to deploy applications on OpenShift, you will need the KSOPS Kustomize plugin, which decrypts resources that were encrypted with SOPS.

On the cluster, the admin will:

  • Deploy the GitOps operator
  • Generate a key pair using age
  • Create a secret in the GitOps namespace that stores the public and private keys
  • Customize Argo CD to use the KSOPS plugin
  • Push the public key into the git repository

Developers will:

  • Create the Secret manifest locally
  • Download the public key and encrypt the secret with the SOPS CLI
  • Generate the KSOPS YAML with the encrypted secret and push it to the git repository

Argo CD will use KSOPS to decrypt the secrets file before it deploys the secrets on the cluster.
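For illustration, here is a minimal sketch of the key generation and encryption steps with age and the SOPS CLI (file names are hypothetical; in practice the admin generates the key pair and publishes only the public half, as described above):

```
# Admin: generate an age key pair; the key file contains a '# public key: age1...' line
age-keygen -o age.key
AGE_PUBLIC_KEY=$(grep 'public key' age.key | cut -d: -f2 | tr -d ' ')

# Developer: encrypt only the data/stringData values of the Secret manifest
sops --encrypt --age "$AGE_PUBLIC_KEY" \
  --encrypted-regex '^(data|stringData)$' \
  secret.yaml > secret.enc.yaml

# Commit the encrypted manifest; Argo CD with KSOPS decrypts it at deploy time
git add secret.enc.yaml && git commit -m "Add encrypted secret"
```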

Both approaches encrypt secrets using asymmetric cryptography, and both provide a way to decrypt the sensitive data as it is deployed as secrets on the cluster. Sealed Secrets is natively integrated with Kubernetes, while SOPS/KSOPS can work independently without needing a controller on the cluster. Sealed Secrets uses strong crypto such as AES-256-GCM, while SOPS uses GPG and age keys. SOPS provides integration with cloud provider KMSes; Sealed Secrets currently does not, but has plans to do so in the future (see here). SOPS encrypts only the values in the secret and supports YAML, JSON, env var, and binary value encryption, which also makes it useful for encrypting Helm charts for non-Kubernetes deployments.

However, as you can see, once the secret data is on the cluster it is decrypted before use, so this solves only part of the problem. Next, let's look at the different options we have for encrypting data on the cluster.

Encrypting secrets on the Cluster

Default etcd encryption options in OpenShift

By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security.  When you enable etcd encryption, the following API server resources are encrypted:

  • Secrets
  • Config maps
  • Routes
  • OAuth access tokens
  • OAuth authorize tokens

Encryption keys are generated by the system and automatically rotated once a week. The keys themselves are actually pairs of <encryption-function> and <encryption-key>, where the encryption function is one of the supported encryption functions from upstream (for example, identity) and the encryption key is a corresponding base64 key string. OpenShift provides the aescbc type, which means AES-CBC with PKCS#7 padding and a 32-byte key is used to perform the encryption. Recent OpenShift versions also support even stronger encryption with AES-256-GCM ciphers (the aesgcm type).
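For reference, a minimal sketch of enabling etcd encryption with the aescbc type and checking its progress (adapted from the documented procedure; run as a cluster admin):

```
# Set the encryption type on the cluster APIServer resource
oc patch apiserver cluster --type merge \
  -p '{"spec":{"encryption":{"type":"aescbc"}}}'

# Wait for the condition to report that all resources have been encrypted
oc get openshiftapiserver \
  -o jsonpath='{.items[0].status.conditions[?(@.type=="Encrypted")].reason}'
```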

Interestingly, when you enable etcd encryption, the keys are created and stored locally on the API server host filesystem at /etc/kubernetes/static-pod-resources/kube-apiserver-pod-<REVISION>/secrets/encryption-config/encryption-config. Hence, you must take care to back up the encryption config separately from the rest of the etcd content.

Kube KMS integration in HyperShift

Red Hat introduced Hosted Control Planes as a means to separate control plane and data plane functions, letting developers focus on deploying applications while leaving control plane management to ops teams. Based on the HyperShift project, Hosted Control Planes is now available on Amazon Web Services (AWS) as a technology preview and will be extended to Azure, bare metal and KubeVirt in the future. Hosted Control Planes supports both direct aescbc encryption and Kube KMS integration.

Upstream Kubernetes strongly recommends using Kube KMS. This is because the Kubernetes KMS encryption provider uses an envelope encryption scheme, which is a strong encryption option. The data is encrypted with a data encryption key (DEK) using AES-CBC with PKCS#7 padding prior to v1.25 and AES-GCM starting from v1.25, and a new DEK is generated for each encryption. The DEKs are in turn encrypted with a key encryption key (KEK) that is stored and managed in a remote KMS. The KMS provider uses gRPC to communicate with a specific KMS plugin. The KMS plugin, implemented as a gRPC server and deployed on the same host(s) as the Kubernetes control plane, is responsible for all communication with the remote KMS. The figure below shows the encryption process workflow.

[Figure: Kubernetes KMS v2 envelope encryption workflow]

Image source: https://kubernetes.io/blog/2022/09/09/kms-v2-improvements/ 
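For context, this is roughly what an upstream kube-apiserver EncryptionConfiguration with a KMS v2 provider looks like (a sketch only; the plugin name and socket path are hypothetical, and on OpenShift and Hosted Control Planes this configuration is managed for you):

```
cat <<EOF > encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - kms:
          apiVersion: v2
          name: example-kms-plugin                      # hypothetical plugin name
          endpoint: unix:///var/run/kmsplugin/socket.sock
          timeout: 3s
      - identity: {}                                    # fallback for reading data written before encryption
EOF
```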

Network Bound Disk Encryption

Alternatively, you can encrypt the disks where the etcd datastore resides. Key benefits of local disk encryption are:

  • Provides encryption for local storage
  • Addresses disk/image theft, which is important for edge cluster deployments
  • Platform/cloud agnostic implementation
  • TPM/vTPM (v2) and Tang endpoints for automatic decryption

Network Bound Disk Encryption (NBDE) is available with Red Hat Enterprise Linux CoreOS (RHCOS). It provides a platform-agnostic means of automating the decryption of local storage. NBDE relies on Linux Unified Key Setup-on-disk-format (LUKS) for encryption. The primary benefit is limiting the risk of disk or image theft. Clevis is a pluggable framework for automated decryption that can decrypt data or unlock LUKS volumes without prompting for a passphrase. Clevis provides the client-side components, and Tang provides a stateless, lightweight server, with encryption/decryption of the data working over HTTP (unencrypted) or HTTPS. Since OCP 4.3, we support both local and network-based Clevis pins: local pins use TPM/vTPM, and network pins use a Tang server. Nodes using this feature will only boot properly when the correct TPM or Tang endpoint is available.

Workflow with Clevis and Tang:

  • Tang runs a simple web-based service that advertises a public signing key; clients use it to generate the key pairs that encrypt the data.
  • The Clevis client generates a strong cryptographic key pair, using the signing key provided by the Tang server, to perform the encryption. Encryption is performed with the generated private key, which is discarded after encryption completes, protecting the data until the private key is reconstituted.
  • The Clevis client uses an ephemeral key to obtain the information it needs from the Tang server to reconstitute the private key so that it can decrypt the data. This process is known as the McCallum-Relyea exchange (a command-line sketch follows this list).
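Outside of OpenShift's install-time automation, the underlying binding can be sketched with the Clevis CLI (the device path and Tang URL are hypothetical):

```
# Bind an existing LUKS volume to a Tang server so it can be unlocked automatically at boot
sudo clevis luks bind -d /dev/sda4 tang '{"url": "http://tang.example.com:7500"}'

# Verify the binding; the slot should list the tang pin and its configuration
sudo clevis luks list -d /dev/sda4
```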

By default, LUKS uses the aes-xts-plain64 cipher with a 512-bit key (XTS mode, as used by Anaconda). Ciphers that are available are:

  • AES - Advanced Encryption Standard
  • Twofish (a 128-bit block cipher)
  • Serpent

Cloud provider disk encryption options

Using etcd encryption in OpenShift along with cloud provider node-level encryption such as EBS encryption can provide an extra layer of protection, especially for the etcd encryption key that is stored locally on the API server.

When you host OpenShift clusters on AWS, Amazon EBS encryption can be enabled to encrypt the EBS volumes that back the RHCOS disks of your EC2 instances (see the example after the list below). Amazon EBS encryption uses AWS KMS keys when creating encrypted volumes and snapshots, and AES-256-XTS for block cipher encryption. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

  • Data at rest inside the volume
  • All data moving between the volume and the instance
  • All snapshots created from the volume
  • All volumes created from those snapshots
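As an example, account-level default encryption for new EBS volumes can be enabled with the AWS CLI (a sketch; the KMS key alias is hypothetical):

```
# Encrypt all newly created EBS volumes in this region by default
aws ec2 enable-ebs-encryption-by-default

# Optionally use a customer-managed KMS key instead of the AWS-managed default
aws ec2 modify-ebs-default-kms-key-id --kms-key-id alias/my-ebs-key
```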

Similarly, Azure provides an encryption option for Azure Managed Disks that integrates with Azure Key Vault, and Google provides an encryption option for Google Cloud Storage. Both use AES-256 keys by default but can also use customer-managed or customer-supplied keys with KMS integration.

Data in motion encryption on the cluster: TLS Everywhere

We have covered a lot of ground on data-at-rest encryption, but secrets need to be secured throughout the build, deploy and run lifecycle of applications, both on and off the cluster.

So, key considerations here should be:

  • All ingress and egress traffic should be encrypted at the cluster level as well as the application level
  • Traffic between the control plane and worker nodes should be encrypted
  • Certificate management should be automated

The OpenShift platform provides TLS configuration and automated certificate management for control plane components (see here). For workloads, we recently added support for the cert-manager Operator for Red Hat OpenShift, which helps manage certificates that workloads can use for TLS connections; see here for more details. In addition, OpenShift Service Mesh can be used to encrypt pod-to-pod traffic.
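As an illustration, requesting a serving certificate for a workload with cert-manager looks roughly like this (the namespace, DNS name, and issuer are hypothetical and must already exist):

```
cat <<EOF | oc apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-tls
  namespace: myapp
spec:
  secretName: myapp-tls             # cert-manager stores the signed cert and key in this Secret
  dnsNames:
    - myapp.apps.example.com
  issuerRef:
    name: my-cluster-issuer         # hypothetical ClusterIssuer configured by the admin
    kind: ClusterIssuer
EOF
```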

Secrets using third-party secret store integrations

A key reason to choose a third-party secret store is to ensure that the lifecycle of secrets is managed outside the cluster in a centralized secret-storage solution. These secret stores offer authentication and authorization policies and procedures that are distinct from those on the cluster and may be better suited to controlling application data access. Most of these solutions also offer envelope encryption and HSM support, which regulatory authorities often require. Popular solutions include HashiCorp Vault, CyberArk Conjur, AWS Secrets Manager, Azure Key Vault, Google Secret Manager, 1Password and others.

Sidecar solutions

Solutions such as Vault and CyberArk can inject secrets specific to the application pod. In both cases, sidecar or init containers are responsible for authenticating to the secret provider, and the application can then use the returned secret whenever necessary. The connection to the provider is over TLS, keeping secrets protected as they are retrieved. Vault provides additional security with response wrapping, which allows you to pass credentials around without any of the intermediaries actually seeing them. Customers who choose these solutions can decide to store the secrets on or off the clusters. Typically, customers who already use Vault or CyberArk for their infrastructure and other application needs will lean toward these integrations for a seamless secrets management experience on OpenShift.
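As an illustration, the Vault agent injector is driven by annotations on the pod template, roughly like the sketch below (the role name and secret path are hypothetical, and the injector and Vault Kubernetes auth must already be configured):

```
cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 1
  selector:
    matchLabels: {app: payments}
  template:
    metadata:
      labels: {app: payments}
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "payments"                                  # Vault Kubernetes auth role
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/payments/db"
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "infinity"]
        # the injected sidecar renders the secret to /vault/secrets/db-creds
EOF
```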

Secret Store CSI (SSCSI) driver and provider solution

The Secrets Store CSI driver allows mounting secrets and other sensitive information into application pods as a volume. The driver communicates with a provider over gRPC to retrieve the secret content from the external secrets store specified in the SecretProviderClass custom resource. Once the volume is attached, the data in it is mounted into the container’s file system. Unlike the sidecar solutions above, which bring in secrets from specific providers, the SSCSI driver can be configured to retrieve secrets from multiple different secret providers. More information on how the driver and provider work can be found here.
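A sketch of what this looks like in practice, using the Vault provider as an example (the provider parameters are hypothetical and differ for each provider):

```
cat <<EOF | oc apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-vault-secrets
spec:
  provider: vault                        # e.g. vault, aws, azure, gcp
  parameters:
    roleName: "app"
    vaultAddress: "https://vault.example.com:8200"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/app/db"
        secretKey: "password"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: secrets-store                # secret content appears as files under /mnt/secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: app-vault-secrets
EOF
```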

SSCSI is primarily chosen by customers that don’t want secrets stored in etcd as Kubernetes secrets because:

  • They may have stringent compliance requirements that make it necessary to store secrets and manage them ONLY in a central store vs on clusters.
  • They may be bringing workloads into an environment where the control plane is not managed by them, so they want full control of their workload secrets and don't trust the platform admin with them. For example, this can be customers running workloads as tenants on managed service provider clusters, or bringing workloads onto cloud platforms where the control plane is not managed by them.

The SSCSI driver does not directly provide ways to secure non-volume-mount secrets, such as those needed as environment variables or image pull secrets, or those you may create directly on the cluster for managing ingress certificates. However, you can use its sync-secrets functionality, which creates Kubernetes secrets and thereby supports exposing secrets as environment variables.

External Secrets Operator (ESO)

External Secrets Operator (ESO) is a user-friendly solution for synchronizing secrets from external secret management solutions into Kubernetes Secrets. ESO runs in the Kubernetes/OpenShift cluster as a deployment and uses CustomResourceDefinitions (CRDs): access to secret providers is configured through SecretStore resources, and Kubernetes Secret resources are managed through ExternalSecret resources.
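A minimal sketch of an ExternalSecret (the store name and remote key are hypothetical; a SecretStore pointing at your provider must already exist):

```
cat <<EOF | oc apply -f -
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: my-secret-store              # hypothetical SecretStore configured by the admin
    kind: SecretStore
  target:
    name: db-credentials               # Kubernetes Secret that ESO creates and keeps in sync
  data:
  - secretKey: password
    remoteRef:
      key: prod/app/db                 # path of the secret in the external provider
      property: password
EOF
```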

ESO is chosen by customers when:

  • They need easy integration with the platform and ease of use for developers
  • They have a high degree of trust in the control plane of the cluster, especially in how etcd encryption is configured and how RBAC is managed on the cluster
  • They have multi-cluster hub-and-spoke use cases for secrets management and need cross-cluster secrets integrations
  • They need platform secrets managed for non-application usage, for example for Ingress, automation or image pull secrets
  • The secrets need to be modified on the cluster with templating for specific applications
  • And lastly and importantly, their use case needs secrets on the cluster

What gets stored on the cluster?

A simple comparison of these services shows what each one stores on the cluster once the applications are deployed and running. It’s important to note that all of the solutions above leave some sensitive data on the cluster. Hence, data-at-rest encryption best practices and RBAC remain extremely important alongside these solutions to provide end-to-end security.

[Figure: comparison of what each solution stores on the cluster]

Integrating applications with HSMs

Hardware Security Modules (HSMs) are typically hardened, tamper-resistant hardware devices that safeguard secrets and perform encryption, decryption and other cryptographic functions. HSMs provide a root of trust to protect the master keys that encrypt credentials and secrets. While they are commonly used with traditional applications, HSM deployment can be a bit complex in cloud native environments. OpenShift provides ways to integrate with HSMs so applications can use them to encrypt secrets directly on the cluster. Solutions such as nCipher nShield or Luna HSM provide agent integrations with HSM backends.

In the build phase, nCipher provides nShield libraries that can be used, along with Red Hat Universal Base Images, to create application images that connect to HSM servers. In the deploy phase, nShield hardserver containers are deployed alongside application containers in the pod. At runtime, the hardserver connects to one or more nShield HSMs, on premises or in the cloud, and manages encryption and decryption of application secrets.

[Figure: OpenShift application pods using nShield hardserver containers to reach HSM backends]

Image: https://cloud.redhat.com/blog/self-contained-ready-and-secured-enhancing-red-hat-openshift-with-hardware-cryptography 

HSMs are also incorporated into the OpenShift platform via OpenShift Data Foundation and used for Persistent Volume (PV) encryption with CipherTrust Manager, as described here.

Review and Recap  

Now that you are aware of the various options Red Hat OpenShift provides and how each works to protect your sensitive data, take a holistic approach: analyze both platform and application dependencies to make an informed choice based on your use case. In our upcoming release, we plan to enable solutions for external secret storage provider integrations to help resolve some of your concerns around secrets management on the cluster. Reach out to us if you have any questions. We welcome any feedback on this blog, and we'd love to hear about the use cases and tools you prefer for secrets management.

