This blog was written by Mayur Shetty, Principal Solution Architect (Red Hat) and Yury Kylulin, Technical Solutions Specialist - TSS (Intel)

Hybrid and multicloud infrastructures are becoming a major part of data center deployments, whether you are talking about databases, AI, machine learning, or telecommunications workloads. Today’s cloud-native infrastructures benefit from a hybrid multicloud approach, with some workloads running in private clouds, some running in public clouds, and others running on-premises. Distributing workloads this way is especially important for edge applications that need to run on-premises and for customers who want to control where they store applications and sensitive data.

Responding to customer interest in multicloud, Red Hat and Intel promote Intel hardware plus Red Hat OpenShift as the backbone of an effective hybrid multicloud architecture, showing customers how they benefit from this technology’s flexibility. To this end, Red Hat has made substantial progress in enabling OpenShift service offerings across multiple cloud providers as well as on-premises, supporting bare-metal deployments and virtual infrastructure platforms from VMware and OpenStack.

Red Hat’s hybrid multicloud architecture is powered by Intel technologies that are highly optimized for a wide range of workloads. For edge and private cloud solutions, the architecture lets you tailor compute, network, memory, and accelerator resources to specific needs. For public cloud deployments at scale, the Intel Xeon Scalable platform offers compelling price performance on industry-standard infrastructure and is available in multiple configurations at different cloud service providers, giving you choice and flexibility as your needs scale.

Red Hat and Intel have a long history of collaboration, and together we drive open source innovation to accelerate digital transformation in the industry. With Intel providing the infrastructure hardware and Red Hat providing the infrastructure software, together we enable solutions that transition workloads from VMs to containers, build hybrid or multicloud infrastructures, and leverage the power of microservices and Kubernetes. By deploying Red Hat OpenShift on the Intel Xeon Scalable platform across all cloud instances, we can offer a true hybrid multicloud solution, one that provides flexibility and application portability across many different footprints.

Key to this hybrid multicloud technology is Red Hat Advanced Cluster Management (RHACM) for Kubernetes. RHACM manages a complete Kubernetes infrastructure from a single console and can be deployed across different footprints. RHACM extends the value of Red Hat OpenShift by deploying applications, managing multiple clusters, and enforcing policies across multiple clusters at scale.

For this exercise, we have installed RHACM on a Red Hat OpenShift Service on AWS (ROSA) cluster (in our demo, the AWS region used was eu-central-1, Frankfurt, Germany). RHACM is deployed in a hub-and-spoke model, with the AWS cluster acting as the hub that manages the spoke OpenShift clusters elsewhere.
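For reference, RHACM can be installed on the hub cluster from OperatorHub; once the Advanced Cluster Management operator is running, creating a MultiClusterHub resource completes the hub setup. A minimal sketch (the namespace shown is the operator default; adjust to your installation):

```yaml
# Minimal MultiClusterHub resource; assumes the RHACM operator has already
# been installed from OperatorHub into the open-cluster-management namespace
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
```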

The ROSA Cluster

First, install the ROSA cluster by following the instructions listed in the Overview of ROSA deployment workflow section of the documentation.
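For reference, the core installation steps with the rosa CLI look roughly like this (a sketch; the cluster name is illustrative, and you need AWS credentials plus a Red Hat API token):

```shell
# Log in with the offline access token from console.redhat.com
rosa login --token="<your-token>"

# Verify AWS quota and prepare the account for ROSA
rosa verify quota
rosa init

# Create the cluster (name and region are illustrative)
rosa create cluster --cluster-name=rhacm-hub --region=eu-central-1

# Follow the installation logs
rosa logs install --cluster=rhacm-hub --watch
```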

After the installation of the ROSA cluster, we can see on the AWS console that seven EC2 instances have been created as part of the cluster:

  • 3 masters: m5.xlarge
  • 2 infra nodes: r5.xlarge
  • 2 worker nodes: m5.xlarge

The following screen capture shows the instances created on the AWS console:

These screen captures give the details of the instances that were used for the master, infra, and worker nodes:

The instances are based on first- and second-generation Intel Xeon Scalable processors. Different Intel-based instance types can be selected depending on your needs.

The OpenShift version used on ROSA is 4.8.19, and the RHACM version is 2.3.3.

Once logged into the RHACM console, you will see the ROSA cluster (local-cluster):

The Bare-Metal Cluster

The first spoke cluster for this exercise is a bare-metal OpenShift cluster deployed in an Intel data center in Russia. The installation of the cluster is done according to the OpenShift docs.
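For reference, a bare-metal installation is driven by an install-config.yaml file; a trimmed sketch (the domain, cluster name, replica counts, and network values below are all illustrative):

```yaml
# Trimmed install-config.yaml sketch for a bare-metal cluster;
# all names and values below are illustrative
apiVersion: v1
baseDomain: example.com
metadata:
  name: baremetal-cluster
controlPlane:
  name: master
  replicas: 3
compute:
  - name: worker
    replicas: 2
networking:
  networkType: OpenShiftSDN
  machineNetwork:
    - cidr: 192.168.0.0/24
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
```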

The bare-metal cluster is running OpenShift version 4.8.19, and the hardware configuration used for the nodes is based on third-generation Intel Xeon Scalable processors.

The Azure Cluster

The second spoke cluster is a Microsoft Azure Red Hat OpenShift (ARO) cluster (in our demo, the ARO cluster was deployed in the West Europe region). The installation of the cluster is done according to the OpenShift docs. Azure account configuration details are found here.

ARO cluster installation instructions can be found here.
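For reference, the core steps with the az CLI look roughly like this (a sketch; the resource group, network, and cluster names are illustrative, and the virtual network is assumed to already contain master and worker subnets):

```shell
# One-time: register the ARO resource provider in the subscription
az provider register -n Microsoft.RedHatOpenShift --wait

# Create the cluster; resource group, vnet, subnet, and cluster names
# are illustrative and assumed to exist where required
az aro create \
  --resource-group aro-rg \
  --name aro-cluster \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --location westeurope
```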


After the installation of the ARO cluster, we can see on the Azure console that six VM instances have been created as part of the cluster:

  • 3 masters: Standard D8s v3
  • 3 worker nodes: Standard D4s v3

Dsv3-series sizes run on first- and second-generation Intel Xeon Scalable processors or Intel Xeon E5 v3/v4 processors. There is also an option to select different Intel-based instance types depending on your needs:

The screen capture below shows the instances created by ARO on the Azure console:

Managing the Clusters with RHACM

RHACM can be used to manage the lifecycle of an OpenShift cluster; that is, we can deploy a new OpenShift cluster, import an existing cluster into RHACM, and delete clusters.

For this demo, we imported a previously deployed OpenShift cluster on bare metal and an ARO cluster. To do this, click the “Import cluster” button:

This takes us to the Importing an existing cluster page:

From the Import screen, copy the command and run it on the cluster being imported into RHACM.
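The same registration can also be expressed declaratively: the import creates a ManagedCluster resource on the hub, and the command copied from the Import screen applies the generated klusterlet manifests on the spoke. A sketch of the hub-side resource (the cluster name and label are illustrative):

```yaml
# ManagedCluster resource on the RHACM hub; the name is illustrative
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: baremetal-cluster
  labels:
    environment: baremetal   # label later used for application placement
spec:
  hubAcceptsClient: true
```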

After adding the two spoke clusters (bare metal and ARO), the cluster management console will look like this:

The Workload

As part of this exercise, we want to demonstrate migrating an application between the on-premises cluster and the cloud-based OpenShift clusters managed by RHACM. For testing purposes, we use a cloud technology discovery application that detects the infrastructure platform and environment technologies that can be used to accelerate data processing for different kinds of use cases. It can serve as a starting point for understanding the technologies currently available across different clouds.

The application consists of a deployment that runs one test pod in the cluster. Once you create the application, it appears in the Applications section of RHACM. The application’s placement rule is based on a label called “environment.” Depending on the setting, its value can be “aws” for the ROSA cluster, “azure” for the ARO cluster, or “baremetal” for the on-premises bare-metal cluster.
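Under the hood, this kind of placement is expressed as a PlacementRule that selects clusters by the environment label; a sketch with illustrative names:

```yaml
# PlacementRule selecting the target cluster by its environment label;
# the rule name and namespace are illustrative
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: demo-app-placement
  namespace: demo-app
spec:
  clusterSelector:
    matchLabels:
      environment: aws   # "aws", "azure", or "baremetal"
```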

This screen capture shows an application running on the ROSA cluster:

Because RHACM is installed on ROSA, the ROSA cluster appears in the UI as the local cluster.

Let’s move the application from ROSA to ARO as an example. To accomplish this, we use the editor to change the environment label from “aws” to “azure”:

Once we save the application, it will be terminated on the ROSA cluster and scheduled on the ARO cluster:
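The same move can be made from the CLI by patching the placement rule on the hub; a sketch, assuming an illustrative rule named demo-app-placement in the demo-app namespace:

```shell
# Retarget the placement selector from the ROSA cluster ("aws") to the
# ARO cluster ("azure"); rule and namespace names are illustrative
oc patch placementrule demo-app-placement -n demo-app --type merge \
  -p '{"spec":{"clusterSelector":{"matchLabels":{"environment":"azure"}}}}'
```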


The process for moving the application to the on-premises cluster is the same, except the value is set to “baremetal.”


The ability to easily shift between on-premises infrastructure and multiple cloud providers offers:

  • Application portability
  • Capability and agility to optimize deployments across any footprint
  • Pay-as-you-go model
  • Automation of application deployment
  • A best-of-both-worlds model that leverages the scale of cloud with the capabilities of on-premises infrastructure

Strategic value

  • Write an application once and deploy it anywhere
  • Low bar of entry for new technology adoption

In this article, we have discussed how to migrate an application between an on-premises bare-metal OpenShift cluster and managed OpenShift clusters on ROSA and ARO. In the next blog post, we will walk through how to migrate data between the OpenShift clusters in this multicluster architecture and explore how to manage data storage in a hybrid multicloud architecture.

For a video demo of the material we’ve discussed in this blog, click here.

About the author

Mayur Shetty is a Principal Solution Architect with Red Hat’s Global Partners and Alliances (GPA) organization, working closely with cloud and system partners. He has been with Red Hat for more than five years and was part of the OpenStack Tiger Team.
