Many companies choose Red Hat OpenShift as the common platform to develop and run all their applications. By doing so, they avoid a heterogeneous environment that can create a lot of complexity. Not only do they build and run new cloud-native applications on Red Hat OpenShift, but they can also migrate their legacy ones to it.

One of the main advantages of using OpenShift is that developers only need to learn one interface while the underlying details of the platform are abstracted away. This can result in significant productivity increases.

Red Hat OpenShift Service on AWS (ROSA)

Some of the customers who decide to adopt OpenShift want to take a further simplification step. They prefer not to worry about providing the infrastructure for their clusters and managing them. They want their teams to be productive from day one and focus only on developing applications. An option for them is Red Hat OpenShift Service on AWS (ROSA).

ROSA is hosted completely on the Amazon Web Services (AWS) public cloud. It is maintained jointly by Red Hat and AWS, which means that the control plane and compute nodes are fully managed by a Red Hat team of Site Reliability Engineers (SREs) with joint Red Hat and Amazon support. This covers installation, management, maintenance and upgrades on all the nodes.

Deployment options for ROSA

There are two main ways to deploy ROSA: as a public cluster or as a PrivateLink cluster. In both cases, we recommend deploying across multiple availability zones for resiliency and high availability.

Public clusters are mostly used for workloads without stringent security requirements. The cluster is deployed in a Virtual Private Cloud (VPC) inside a private subnet, which contains the control plane nodes, the infrastructure nodes and the worker nodes where the applications run. Because the cluster still needs to be reachable from the internet, the VPC also needs a public subnet.

AWS load balancers (Elastic and Network Load Balancers) deployed in this public subnet allow both the SRE team and the users of the applications (that is, ingress traffic to the cluster) to connect. In the case of the users, a load balancer redirects their traffic to the router service running on the infrastructure nodes, and from there it is forwarded to the desired application running on one of the worker nodes. The SRE team uses a dedicated AWS account to connect to the control plane and infrastructure nodes through different load balancers.
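As an illustration, a multi-AZ public cluster like the one described above can be created with the rosa CLI. This is a sketch only: the cluster name and region are placeholders, and it assumes you have already logged in with `rosa login` and have the AWS CLI configured.

```shell
# Create the account-wide IAM roles that ROSA needs (one-time setup)
rosa create account-roles --mode auto --yes

# Create a public, multi-AZ cluster using AWS STS credentials
# (cluster name and region are illustrative placeholders)
rosa create cluster \
  --cluster-name my-public-cluster \
  --sts \
  --multi-az \
  --region us-east-1 \
  --mode auto \
  --yes
```

With `--multi-az`, the control plane and worker nodes are spread across three availability zones in the chosen region, matching the resiliency recommendation above.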

Figure 1. ROSA public cluster


For production workloads with more stringent security needs, we recommend deploying a PrivateLink cluster. In this case, the VPC in which the cluster resides has only a private subnet, meaning that it cannot be accessed at all from the public internet.

The SRE team uses a dedicated AWS account that connects to an AWS Load Balancer via an AWS PrivateLink endpoint. The load balancer redirects the traffic to the control or infrastructure nodes as needed. (Once the AWS PrivateLink is created, the customer needs to approve the access from the SRE team’s AWS account.) The users connect to an AWS Load Balancer which redirects them to the router service on the infrastructure nodes. From there they are sent to the worker node where the application they want to access is running.
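A PrivateLink cluster is requested at creation time by pointing ROSA at the customer's existing private subnets. A minimal sketch, in which the cluster name, subnet IDs and machine CIDR are all illustrative placeholders:

```shell
# Create a multi-AZ PrivateLink cluster in existing private subnets
# (subnet IDs and machine CIDR are illustrative placeholders)
rosa create cluster \
  --cluster-name my-private-cluster \
  --sts \
  --multi-az \
  --private-link \
  --machine-cidr 10.0.0.0/16 \
  --subnet-ids subnet-aaa,subnet-bbb,subnet-ccc \
  --mode auto \
  --yes
```

As noted above, once the PrivateLink endpoint is created, the customer still has to approve the connection request from the SRE team's AWS account before management traffic can flow.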

In PrivateLink cluster implementations, it is common for customers to want to redirect the cluster's egress traffic to their on-premises infrastructure or to other VPCs in the AWS cloud. They can use an AWS Transit Gateway or AWS Direct Connect for this, so there is no need to deploy a public subnet in the VPC where the cluster resides. Even if they need to send egress traffic to the internet, they can connect (via the AWS Transit Gateway) to a VPC that has a public subnet with an AWS NAT Gateway and an AWS Internet Gateway.
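The Transit Gateway attachment is configured on the AWS side, outside of ROSA itself. A sketch with the AWS CLI, in which every resource ID is an illustrative placeholder:

```shell
# Attach the cluster's VPC to an existing Transit Gateway
# (all resource IDs are illustrative placeholders)
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-aaa subnet-bbb subnet-ccc

# Send egress traffic from the cluster's private route table
# to the Transit Gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --transit-gateway-id tgw-0123456789abcdef0
```

From the Transit Gateway, routes can then lead to other VPCs, to a Direct Connect gateway for on-premises traffic, or to an egress VPC with a NAT Gateway and Internet Gateway as described above.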

Figure 2. ROSA private cluster with PrivateLink


In both public and PrivateLink implementations, the cluster can interact with other AWS services by using AWS VPC endpoints, which connect the VPC where the cluster resides with the desired services.
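For example, a gateway endpoint for Amazon S3 can be added to the cluster's VPC so that traffic to S3 stays on the AWS network. The VPC, route table and region values below are illustrative placeholders:

```shell
# Create a gateway VPC endpoint for S3 in the cluster's VPC
# (VPC ID, route table ID and region are illustrative placeholders)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0
```

Interface endpoints (`--vpc-endpoint-type Interface`) work the same way for services that do not offer gateway endpoints.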

Connecting to the cluster

The recommended way for the SRE team to log in to ROSA clusters and carry out administration tasks is to use the AWS Security Token Service (STS). The principle of least privilege should be applied, so that only the roles strictly necessary to accomplish a task are assumed. The token is temporary and single use; if a similar task needs to be done again after it has expired, a new token has to be requested.

STS is also used when the ROSA cluster itself connects to other AWS services, such as EC2 (for example, when new servers need to be spun up for the cluster) or EBS (when persistent storage is needed).
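In an STS-based installation, the cluster components that talk to EC2, EBS and other services each get their own narrowly scoped IAM role, which the rosa CLI can create. A sketch, assuming a cluster named my-cluster and a placeholder role ARN:

```shell
# Create the per-cluster operator roles and OIDC provider so that
# cluster components (storage, networking, registry, and so on) can
# request temporary STS credentials scoped to their own tasks
# (the cluster name is an illustrative placeholder)
rosa create operator-roles --cluster my-cluster --mode auto --yes
rosa create oidc-provider --cluster my-cluster --mode auto --yes

# An administrator can likewise assume a role to obtain short-lived
# credentials instead of using long-lived access keys
# (the role ARN is an illustrative placeholder)
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/ROSA-Admin \
  --role-session-name temporary-admin-session
```

The `assume-role` call returns temporary credentials that expire automatically, which is what makes the least-privilege model described above practical to enforce.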

Summary

Adopting DevOps methodologies and modernizing the way applications are deployed using an enterprise Kubernetes platform like OpenShift is applicable to all types of customers. They can choose to host it on-premises and manage it themselves, but if they prefer not to, one option is ROSA. The large number of AWS services that can interact with ROSA clusters helps customers make the most of their platform.

About the author

Ricardo Garcia Cavero joined Red Hat in October 2019 as a Senior Architect focused on SAP. In this role, he developed solutions with Red Hat's portfolio to help customers in their SAP journey. Cavero now works as a Principal Portfolio Architect for the Portfolio Architecture team.
