Red Hat’s OpenShift Container Platform (OCP) is a Kubernetes platform for operationalizing container workloads remotely or as a hosted service. OpenShift enables consistent security, built-in monitoring, centralized policy management, and compatibility with Kubernetes workloads. The rapid adoption of open source projects can introduce vulnerabilities in standard Kubernetes environments; OCP supports these projects internally, allowing users to gain the advantages of open source with the stability and security of a managed product. OpenShift offerings include five managed and two hosted options.
This blog post is part one of a four-part OpenShift security blog series focusing on Red Hat OpenShift Container Platform (RHOCP) version 4.5, which is designed to be self-managed within your own infrastructure and supports a variety of deployment options.
OpenShift Architecture
OpenShift is built on top of Kubernetes, and while Kubernetes provides container orchestration capabilities, pod resiliency, service definitions, and deployment constructs, many other components are required to make it work. For example, Kubernetes does not provide a default Container Network Interface (CNI) or a default monitoring implementation. It is up to the cluster administrator to bring additional tools to operate and manage the Kubernetes cluster and any applications running on it. For security teams, this presents new challenges - for example, these teams need to create new policies and vet images, configurations, and account access for any new applications that will be deployed into the cluster.
These additional, necessary operational capabilities are provided out of the box with OCP and are pluggable so that administrators can customize components and services to meet their infrastructure needs.
OCP’s architecture requires three different types of nodes within each cluster to ensure highly available deployments.
Control Plane Nodes
These nodes run the core Kubernetes control plane functions and provide additional services such as a self-service web console and developer- and operations-focused dashboards.
In most cloud environments, the control plane nodes are hidden from end users and managed by the provider for high availability, regular upgrades, and added security updates. With OCP, administrators manage, view, and interact with the control plane nodes directly, which means they will need to set up their clusters for high availability and adequate security. To comply with industry-standard best practices, a minimum of three control plane nodes should be configured so that the control plane remains accessible if a node fails.
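For installer-provisioned clusters, the control plane replica count is set in install-config.yaml before the installer runs. The excerpt below is a minimal sketch; values such as the base domain and cluster name are illustrative.

```yaml
# install-config.yaml (excerpt) - illustrative values
apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
controlPlane:
  name: master
  replicas: 3        # three control plane nodes for high availability
compute:
- name: worker
  replicas: 3
```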
Infrastructure Nodes
These are nodes dedicated to hosting additional functionality such as OpenShift Routes and the OpenShift internal registry. Infrastructure nodes host administrator and network-focused services that are managed separately from your containerized applications.
App Nodes or Nodes
These are the OCP nodes used to run your containerized applications. These are similar to Kubernetes worker nodes and run various monitoring and networking services required across a cluster.
Cloud IAM, Accounts and Limits
When using a cloud provider, you will want to enforce tight control of individual clusters and other cloud resources sharing a project. Limit access to resources by applying the principle of least privilege. Understand each provider’s account roles and limitations before setting up access to any OCP clusters.
OCP helps with this process by providing in-depth documentation on the installation process, including installation on AWS, Azure, GCP, and IBM Z.
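As an illustration of least privilege on GCP (the service account name, project, and role below are placeholders, and the full set of roles the installer needs is listed in the OCP installation documentation for each provider), you might create a dedicated service account for the installer and bind only the documented roles to it rather than granting broad project-level ownership:

```bash
# Create a dedicated service account for the OpenShift installer (name is illustrative)
gcloud iam service-accounts create ocp-installer \
  --display-name="OpenShift installer"

# Bind only the roles listed in the installation documentation for your provider,
# rather than a broad role such as roles/owner
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:ocp-installer@my-project.iam.gserviceaccount.com" \
  --role="roles/compute.admin"
```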
Private Clusters
Strict network isolation, which prevents unauthorized external ingress to the OpenShift cluster’s API endpoints, nodes, and pod containers, is a critical piece of cluster security. By default, the OpenShift Container Platform is provisioned with publicly accessible DNS, endpoints, and node IP addresses. The DNS, Ingress Controller, and API server can be set to private after installing the cluster. Additionally, OpenShift may expose operations-focused dashboards for admins and developers; ideally, these dashboards will run on infrastructure nodes, away from your high-priority workloads.
The private cluster options vary based on the infrastructure environment. However, there are in-depth guides for setting up a private cluster through various providers. OpenShift outlines the installation methods and network setup options that are currently supported here.
After creating your private cluster, you may need to perform extra configuration steps to ensure your cluster’s components are set up correctly. Also, upgrades to the cluster may require internet access and extra planning.
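On the supported cloud providers, a private cluster is requested at install time by publishing the cluster internally and deploying it into an existing VPC and subnets. The excerpt below is a hedged sketch of the relevant install-config.yaml fields for GCP; the network and subnet names are placeholders, and AWS and Azure use their own provider-specific fields.

```yaml
# install-config.yaml (excerpt) - illustrative values
publish: Internal            # keep the API and Ingress endpoints off the public internet
platform:
  gcp:
    network: my-existing-vpc
    controlPlaneSubnet: control-plane-subnet
    computeSubnet: compute-subnet
```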
Setting up a Bastion Host
A bastion host provides access to a private network from an external network and is a simple way to add an extra layer of security to your OpenShift cluster. A bastion host minimizes the chances of unauthorized access to your OCP cluster by allowing for more tightly tuned access. Benefits of a bastion host include:
- Separate login accounts for everyone accessing the bastion host
- Auditing of user access and time
- Specific node access
A bastion host is a useful way to augment your cluster’s security. Restricting access so that only specific nodes are reachable through the bastion’s .ssh/config allows for private network access and can keep users from tampering with nodes deemed off-limits, as sketched below.
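As a minimal sketch (hostnames, addresses, and key paths are placeholders), an administrator’s ~/.ssh/config can force all node access through the bastion with ProxyJump and list only the nodes that are meant to be reachable:

```
# ~/.ssh/config (illustrative values)
Host bastion
    HostName bastion.example.com
    User ec2-user
    IdentityFile ~/.ssh/ocp_bastion

# Only nodes listed here are reachable, and every connection hops through the bastion
Host infra-node-1
    HostName 10.0.1.15
    User core              # default user on RHCOS nodes
    ProxyJump bastion
```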
Note: When using a cloud provider for deployment, utilize software-defined networks that are available. The proper implementation of cloud IAM accounts, firewall rules, and private networking will significantly reduce the attack surface.
VPC Networks
When deploying your OpenShift cluster, you will want to take advantage of the various cloud providers’ built-in networking and security protections. This will vary depending on the environment; however, there are defaults and best practices to keep in mind during setup.
- Create a single VPC network for each cluster and allow access accordingly.
- Set up firewall rules to allow only the ports your cluster actually requires, and only from trusted source ranges; a hedged example follows this list.
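As an illustration on GCP (the rule name, network, and source range are placeholders; on AWS or Azure the equivalent would be a security group or NSG rule), a rule that exposes the Kubernetes API only to a trusted CIDR might look like this:

```bash
# Allow the Kubernetes API (6443/tcp) only from a trusted address range (illustrative values)
gcloud compute firewall-rules create ocp-api-trusted \
  --network=my-cluster-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:6443 \
  --source-ranges=203.0.113.0/24
```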
Securing etcd
By default, data stored in etcd is not encrypted at rest in the OpenShift Container Platform. Etcd encryption can be enabled in the cluster to provide an additional layer of data security and to help protect against the loss of sensitive data if an etcd backup is exposed to the wrong parties. Since OpenShift recommends taking an etcd backup before any upgrade, encrypting etcd should be a standard practice in your organization.
When you enable etcd encryption, the following API server resources are encrypted:
- Secrets
- ConfigMaps
- Routes
- OAuth access tokens
- OAuth authorized tokens
When etcd encryption is enabled, encryption keys are created. These keys are rotated every week, and the admin must have these keys to restore from an etcd backup.
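Etcd encryption is enabled by setting the encryption type on the cluster’s APIServer resource. The sketch below uses oc patch, which is equivalent to editing the resource with oc edit apiserver; aescbc is the encryption type documented for this version.

```bash
# Enable AES-CBC encryption of etcd data via the cluster APIServer resource
oc patch apiserver cluster --type=merge \
  -p '{"spec":{"encryption":{"type":"aescbc"}}}'

# Encryption is applied asynchronously; the Encrypted condition on the
# OpenShift API server reports progress
oc get openshiftapiserver \
  -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Encrypted")].reason}{"\n"}{end}'
```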
Node Images
Compromised nodes create a danger to your entire cluster and its workloads. Using minimal base operating system (OS) images and configuring read-only file systems provides two critical ways to protect your nodes against many attacks and limit their potential blast radius. With minimal images, attackers have limited tools to leverage, and if they cannot write or overwrite configuration files and binaries on the node’s root file system, they cannot hijack the system as easily nor install their malicious tools.
Providers are increasingly making minimal, container-optimized OS images available, such as AWS Bottlerocket and GCP’s Container-Optimized OS (COS). However, it is best to leverage OpenShift’s relationship with the cloud providers and use the most recent Red Hat Enterprise Linux CoreOS (RHCOS) for all of your OCP cluster’s nodes. RHCOS is the default operating system for all cluster machines; however, you can create worker machines that use RHEL as their operating system.
RHCOS is designed to be as immutable as possible, allowing only a few system settings to be changed. These settings are configured remotely with the help of the Machine Config Operator. This means no user needs to access a node directly, and any change to a node must be authorized through the Machine Config Operator.
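In practice, administrators interact with the Machine Config Operator through MachineConfig and MachineConfigPool resources rather than SSH sessions on the nodes. A few illustrative read-only commands:

```bash
# List the machine configurations the cluster is currently rendering
oc get machineconfig

# Show the pools (master, worker) and whether their nodes are up to date
oc get machineconfigpool

# Node-level changes are made by creating or updating a MachineConfig object,
# which the operator rolls out to the matching pool; direct edits on a node
# are not expected and will be reconciled away.
```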
CRI-O
RHCOS also leverages CRI-O as its default container runtime. CRI-O focuses only on the features needed by Kubernetes platforms, giving it a smaller footprint and a reduced attack surface compared with container engines that include a superset of functionality beyond Kubernetes-centric features. Since OCP is based on Kubernetes, it benefits from these features as well. By not including extra features for direct command-line use or other orchestration facilities, CRI-O keeps its footprint small, which in turn reduces potential vulnerabilities.
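You can confirm the runtime each node reports through the standard Kubernetes node status; on OCP nodes the runtime version string is prefixed with cri-o:

```bash
# The CONTAINER-RUNTIME column shows the runtime each kubelet is using
oc get nodes -o wide

# Or read the field directly from each node's status
oc get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```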
About the author
Michael Foster is a CNCF Ambassador, the Community Lead for the open source StackRox project, and Principal Product Marketing Manager for Red Hat based in Toronto. In addition to his open source project responsibilities, he utilizes his applied Kubernetes and container experience with Red Hat Advanced Cluster Security to help organizations secure their Kubernetes environments. With StackRox, Michael hopes organizations can leverage the open source project in their Kubernetes environments and join the open source community through stackrox.io. Outside of work, Michael enjoys staying active, skiing, and tinkering with his various mechanical projects at home. He holds a B.S. in Chemical Engineering from Northeastern University and CKAD, CKA, and CKS certifications.