Azure Red Hat OpenShift (ARO) HCP

Azure Red Hat OpenShift (ARO) with Hosted Control Planes support and external authentication integration is currently under active development. Steps and commands may change as the feature moves toward General Availability.

This section describes the steps for deploying and integrating an Azure Red Hat OpenShift Hosted Control Planes (ARO HCP) environment with an external authentication provider. The reference implementation uses Red Hat Build of Keycloak as the identity provider.

Unlike other OpenShift environments, ARO HCP clusters require external authentication: there is no built-in OAuth server, so access is only possible through an external authentication provider.

At a high level, the following steps are involved:

  1. Set up the required environment variables.

  2. Deploy an ARO HCP cluster.

  3. Create an external authentication provider using a Bicep template.

  4. Create an administrative credential to access the cluster.

  5. Create a secret for the OpenShift Console OIDC client.

  6. Assign OpenShift RBAC policies to external identities.

  7. Authenticate using credentials from the external authentication provider.

Prerequisites

Assumptions

This guide does not cover installation of ARO HCP or Red Hat Build of Keycloak. It assumes the user has the appropriate permissions to complete the described tasks.

Consult the official ARO HCP and Red Hat Build of Keycloak documentation for additional information.

Required tools

The following utilities must be installed and configured on your local machine:

  • az — Azure Command Line Interface

  • oc — OpenShift Command Line Interface

  • jq — JSON command-line parser

External authentication provider

An external authentication provider must be available and accessible from the ARO cluster before beginning. If you do not already have one configured, see Red Hat Build of Keycloak Configuration for the reference implementation setup steps.

Environment variables

Set the following environment variables before proceeding. These values are reused throughout this section:

export ARO_CLUSTER_NAME=<ARO_CLUSTER_NAME>
export ARO_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
export ARO_RESOURCE_GROUP=<ARO_RESOURCE_GROUP>
export FRONTEND_HOST=$(az cloud show --query endpoints.resourceManager --output tsv)

Set the OpenShift API URL:

export OPENSHIFT_API_URL=$(az rest --method GET \
  --uri "/subscriptions/${ARO_SUBSCRIPTION_ID}/resourceGroups/${ARO_RESOURCE_GROUP}/providers/Microsoft.RedHatOpenShift/hcpOpenShiftClusters/${ARO_CLUSTER_NAME}?api-version=2024-06-10-preview" \
  | jq -r '.properties.apiServer.url')
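
Before continuing, it can help to confirm that each of these variables is actually set. The check_vars helper below is a convenience for this guide, not part of any official tooling:

```shell
# Report any required environment variables that are unset or empty.
check_vars() {
    local var missing=0
    for var in "$@"; do
        if [ -z "${!var:-}" ]; then
            echo "Missing required variable: ${var}" >&2
            missing=1
        fi
    done
    return "${missing}"
}

check_vars ARO_CLUSTER_NAME ARO_SUBSCRIPTION_ID ARO_RESOURCE_GROUP \
    FRONTEND_HOST OPENSHIFT_API_URL \
    || echo "Set the variables above before continuing." >&2
```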

Cluster deployment

Consult the official ARO documentation for the steps to deploy an ARO HCP cluster.

Adding an external authentication provider

The az CLI and a Bicep template are used to add an external authentication provider to an existing ARO HCP cluster.

Requirements

  • The external authentication provider must be network-accessible from the ARO cluster.

  • One or more OAuth clients must be configured in the provider:

OpenShift Component    Required Redirect URI
---------------------  ------------------------------------
OpenShift API          Implementation-specific
OpenShift Web Console  https://<console_host>/auth/callback

Creating the Bicep template

Create the file externalauth.bicep with the following content:

cat <<EOF > externalauth.bicep
@description('The name of the external auth provider configuration')
param externalAuthName string

@description('The issuer url')
param issuerURL string

@description('The audiences for the issuer')
param issuerAudiences array = []

@description('The client ID for the OpenShift CLI')
param cliClientID string

@description('The client ID for the OpenShift Console')
param consoleClientID string

@description('Name of the ARO cluster')
param clusterName string

@description('Name of the claim associated with the username')
param usernameClaim string = 'email'

@description('Name of the claim associated with the groups')
param groupsClaim string = 'groups'

@description('Extra scopes to request during authentication')
param extraScopes array = []

@description('Username prefix policy')
param usernamePrefixPolicy string = 'NoPrefix'

resource hcp 'Microsoft.RedHatOpenShift/hcpOpenShiftClusters@2024-06-10-preview' existing = {
  name: clusterName
}

resource externalauth 'Microsoft.RedHatOpenShift/hcpOpenShiftClusters/externalAuths@2024-06-10-preview' = {
  parent: hcp
  name: externalAuthName
  properties: {
    claim: {
      mappings: {
        username: {
          claim: usernameClaim
          prefixPolicy: usernamePrefixPolicy
        }
        groups: {
          claim: groupsClaim
        }
      }
    }
    clients: [
      {
        clientId: consoleClientID
        component: {
          name: 'console'
          authClientNamespace: 'openshift-console'
        }
        type: 'Confidential'
        extraScopes: extraScopes
      }
      {
        clientId: cliClientID
        component: {
          name: 'cli'
          authClientNamespace: 'openshift-console'
        }
        type: 'Public'
        extraScopes: extraScopes
      }
    ]
    issuer: {
      url: issuerURL
      audiences: issuerAudiences
    }
  }
}
EOF
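
Optionally, the template can be compiled locally to catch syntax errors before deploying it. The az bicep subcommand is bundled with recent versions of the az CLI:

```shell
# Compile the template without deploying it; a successful build
# confirms the Bicep syntax is valid.
if command -v az > /dev/null 2>&1; then
    az bicep build --file externalauth.bicep --stdout > /dev/null \
        && echo "externalauth.bicep compiled successfully" \
        || echo "Bicep build failed; review the errors above" >&2
else
    echo "az CLI not found; skipping template validation" >&2
fi
```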

Bicep template parameters

The Bicep template includes several parameters that can be provided during deployment and are described in the following table:

Parameter             Description
--------------------  --------------------------------------------------------------------
externalAuthName      Name to assign to the external authentication provider configuration
issuerURL             URL of the external authentication provider (issuer URL of the RHBK realm)
issuerAudiences       List of valid OIDC token audience values
usernameClaim         Claim in the OIDC token that contains the username
groupsClaim           Claim in the OIDC token that contains group membership
cliClientID           OAuth Client ID for the OpenShift CLI
consoleClientID       OAuth Client ID for the OpenShift Web Console
clusterName           Name of the ARO cluster
usernamePrefixPolicy  Prefix policy to apply to mapped usernames
extraScopes           Additional OIDC scopes to request during authentication

The first value in issuerAudiences must be the OIDC Client ID for the OpenShift Web Console.

Locating RHBK values

The following steps can be taken to determine the correct values from RHBK to provide as parameters during deployment.

Issuer URL

  1. In the openshift realm, select Realm settings in the left navigation bar.

  2. Next to Endpoints, select the OpenID Endpoint Configuration link.

  3. The issuer field is the first property in the OIDC Discovery Document.
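
The same value can be retrieved from the command line, assuming RHBK_HOST is set to the base URL of your RHBK instance (it is exported later in this section). The issuer_from_discovery helper below is illustrative, not part of any official tooling:

```shell
# Fetch the OIDC discovery document and print its issuer field.
issuer_from_discovery() {
    jq -r '.issuer // empty'
}

if [ -n "${RHBK_HOST:-}" ]; then
    curl --silent "${RHBK_HOST}/realms/openshift/.well-known/openid-configuration" \
        | issuer_from_discovery
fi
```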

Username claim

RHBK uses the preferred_username claim for the username in the OIDC token.
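
To confirm which claims a token actually carries, the payload of an OIDC token can be decoded locally. The jwt_claims helper below is illustrative and does not verify the token signature:

```shell
# Decode the payload (second segment) of a JWT and pretty-print its claims.
# For inspection only -- the signature is NOT verified.
jwt_claims() {
    local payload
    payload=$(cut -d '.' -f2 <<< "${1}" | tr '_-' '/+')
    # Restore the base64 padding stripped by base64url encoding.
    while [ $(( ${#payload} % 4 )) -ne 0 ]; do
        payload="${payload}="
    done
    echo "${payload}" | base64 -d | jq .
}

# Example: jwt_claims "$ID_TOKEN" | jq -r '.preferred_username'
```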

Deploying with RHBK

Set the RHBK host URL:

export RHBK_HOST=<rhbk_host>

Add the external authentication provider to the ARO HCP cluster:

az deployment group create \
  --name aro-hcp-auth \
  --subscription "$ARO_SUBSCRIPTION_ID" \
  --resource-group "$ARO_RESOURCE_GROUP" \
  --template-file externalauth.bicep \
  --parameters \
    externalAuthName="aro-hcp-auth" \
    issuerURL="$RHBK_HOST/realms/openshift" \
    issuerAudiences='["openshift-console", "openshift-cli"]' \
    usernameClaim="preferred_username" \
    cliClientID="openshift-cli" \
    consoleClientID="openshift-console" \
    clusterName="$ARO_CLUSTER_NAME" \
    extraScopes='["profile"]'

After the external authentication provider is created, it may take several minutes to become active on the cluster.
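
Progress can be checked by querying the external auth resource directly. The resource name and API version below match the values used in the deployment above; provisioning_state is a small helper for extracting the field with jq:

```shell
# Query the provisioning state of the external auth configuration.
provisioning_state() {
    jq -r '.properties.provisioningState // empty'
}

if command -v az > /dev/null 2>&1; then
    az rest --method GET \
        --uri "/subscriptions/${ARO_SUBSCRIPTION_ID}/resourceGroups/${ARO_RESOURCE_GROUP}/providers/Microsoft.RedHatOpenShift/hcpOpenShiftClusters/${ARO_CLUSTER_NAME}/externalAuths/aro-hcp-auth?api-version=2024-06-10-preview" \
        | provisioning_state
fi
```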

Administrative credential

Because ARO HCP clusters have no built-in administrator account, an administrative credential (kubeconfig) with cluster-admin privileges must be generated for bootstrap tasks. These credentials are valid for a maximum of 24 hours.

Requesting the credential

Create the script request-aro-admin-credential.sh:

cat << 'EOF' > request-aro-admin-credential.sh
#!/bin/bash

# Initialize variables
FRONTEND_HOST="${FRONTEND_HOST:-$(az cloud show --query endpoints.resourceManager --output tsv)}"
FRONTEND_API_VERSION="${FRONTEND_API_VERSION:-2024-06-10-preview}"
SUBSCRIPTION_ID="${ARO_SUBSCRIPTION_ID:-$(az account show --query id --output tsv)}"
RESOURCE_GROUP="${ARO_RESOURCE_GROUP}"
CLUSTER_NAME="${ARO_CLUSTER_NAME}"

header() {
    echo "${1}: ${2}"
}

authorization_header() {
    if [ -z "${ACCESS_TOKEN:-}" ]; then
        ACCESS_TOKEN=$(az account get-access-token --query accessToken --output tsv)
    fi
    header Authorization "Bearer ${ACCESS_TOKEN}"
}

arm_system_data_header() {
    header X-Ms-Arm-Resource-System-Data "{\"createdBy\": \"${USER}\", \"createdByType\": \"User\", \"createdAt\": \"$(date -u +"%Y-%m-%dT%H:%M:%S+00:00")\"}"
}

arm_x_ms_identity_url_header() {
    # Requests directly against the frontend
    # need to send a X-Ms-Identity-Url HTTP
    # header, which simulates what ARM performs.
    # By default we set a dummy value, which is
    # enough in the environments where a real
    # Managed Identities Data Plane does not
    # exist like in the development or integration
    # environments. The default can be overwritten
    # by providing the environment variable
    # ARM_X_MS_IDENTITY_URL when running the script.
    : ${ARM_X_MS_IDENTITY_URL:="https://dummyhost.identity.azure.net"}
    header X-Ms-Identity-Url "${ARM_X_MS_IDENTITY_URL}"
}

correlation_headers() {
    if [ -n "$(which uuidgen 2> /dev/null)" ]; then
        header X-Ms-Correlation-Request-Id "$(uuidgen)"
        header X-Ms-Client-Request-Id "$(uuidgen)"
        header X-Ms-Return-Client-Request-Id "true"
    fi
}

async_operation_status() {
    # Arguments:
    # $1 = URL
    # $2 = Headers
    OUTPUT=$(echo "${2}" | curl --silent --header @- "${1}")
    STATUS=$(echo "${OUTPUT}" | jq -r '.status')
    echo "${OUTPUT}"
    case ${STATUS} in
        Succeeded | Failed | Canceled)
            return 1
            ;;
        *)
            return 0
            ;;
    esac
}

# Export the function so "watch" can see it.
export -f async_operation_status

rp_request() {
    # Arguments:
    # $1 = HTTP method
    # $2 = URL
    # $3 = Headers
    # $4 = (optional) JSON body
    case ${1} in
        GET)
            CMD="curl --silent --show-error --header @- ${2}"
            ;;
        POST)
            CMD="curl --silent --show-error --include --header @- --request ${1} ${2} --json ''"
            ;;
        *)
            CMD="curl --silent --show-error --include --header @- --request ${1} ${2}"
            if [ $# -ge 4 ]; then
                CMD+=" --json '${4}'"
            fi
            ;;
    esac
    OUTPUT=$(echo "${3}" | eval ${CMD} | tr -d '\r')
    ASYNC_STATUS_ENDPOINT=$(echo "${OUTPUT}" | awk 'tolower($1) ~ /^azure-asyncoperation:/ {print $2}')
    ASYNC_RESULT_ENDPOINT=$(echo "${OUTPUT}" | awk 'tolower($1) ~ /^location:/ {print $2}')

    # If a status endpoint header is present, watch the
    # endpoint until the status reaches a terminal state.
    if [ -n "${ASYNC_STATUS_ENDPOINT}" ]; then
        watch --errexit --exec bash -c "async_operation_status \"${ASYNC_STATUS_ENDPOINT}\" \"${3}\" 2> /dev/null" || true
        if [ -n "${ASYNC_RESULT_ENDPOINT}" ]; then
            FULL_RESULT=$(echo "${3}" | curl --silent --show-error --include --header @- "${ASYNC_RESULT_ENDPOINT}")
            JSON_RESULT=$(echo "${FULL_RESULT}" | tr -d '\r' | jq -Rs 'split("\n\n")[1] | fromjson?')

            # If the response body is JSON, try to extract and write a kubeconfig file.
            KUBECONFIG=$(echo "${JSON_RESULT}" | jq -r '.kubeconfig // empty')
            if [ -n "${KUBECONFIG}" ]; then
                echo "${KUBECONFIG}" > aro-cluster.kubeconfig
                echo "Wrote aro-cluster.kubeconfig"
            else
                echo "${FULL_RESULT}"
            fi
        else
            echo "${OUTPUT}"
        fi
    else
        echo "${OUTPUT}"
    fi
}

rp_get_request() {
    # Arguments:
    # $1 = Request URL path
    # $2 = (optional) API version
    URL="${FRONTEND_HOST}${1}?api-version=${2:-${FRONTEND_API_VERSION}}"
    case "${FRONTEND_HOST}" in
        *localhost*)
            HEADERS=$(correlation_headers)
            ;;
        *)
            HEADERS=$(authorization_header)
            ;;
    esac
    rp_request GET "${URL}" "${HEADERS}"
}

rp_put_request() {
    # Arguments:
    # $1 = Request URL path
    # $2 = Request JSON body
    # $3 = (optional) API version
    URL="${FRONTEND_HOST}${1}?api-version=${3:-${FRONTEND_API_VERSION}}"
    case "${FRONTEND_HOST}" in
        *localhost*)
            HEADERS=$(arm_system_data_header; correlation_headers; arm_x_ms_identity_url_header)
            ;;
        *)
            HEADERS=$(authorization_header)
            ;;
    esac
    rp_request PUT "${URL}" "${HEADERS}" "${2}"
}

rp_delete_request() {
    # Arguments:
    # $1 = Request URL path
    # $2 = (optional) API version
    URL="${FRONTEND_HOST}${1}?api-version=${2:-${FRONTEND_API_VERSION}}"
    case "${FRONTEND_HOST}" in
        *localhost*)
            HEADERS=$(arm_system_data_header; correlation_headers)
            ;;
        *)
            HEADERS=$(authorization_header)
            ;;
    esac
    rp_request DELETE "${URL}" "${HEADERS}"
}

rp_post_request() {
    # Arguments:
    # $1 = Request URL path
    # $2 = (optional) API version
    URL="${FRONTEND_HOST}${1}?api-version=${2:-${FRONTEND_API_VERSION}}"
    case "${FRONTEND_HOST}" in
        *localhost*)
            HEADERS=$(arm_system_data_header; correlation_headers)
            ;;
        *)
            HEADERS=$(authorization_header)
            ;;
    esac
    rp_request POST "${URL}" "${HEADERS}"
}

rp_post_request "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.RedHatOpenShift/hcpOpenShiftClusters/${CLUSTER_NAME}/requestAdminCredential"
EOF

Make the script executable and run it:

chmod +x request-aro-admin-credential.sh
./request-aro-admin-credential.sh

The script requests the credential and waits until it is created. When it finishes, a kubeconfig file is written to aro-cluster.kubeconfig in the current directory.

Using the credential

Use the aro-cluster.kubeconfig file to run commands against the ARO cluster. Set the KUBECONFIG environment variable so the OpenShift CLI uses this file:

export KUBECONFIG=$(pwd)/aro-cluster.kubeconfig

Verify access:

oc get clusteroperators

This lists all cluster-level operators in the ARO cluster.
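
Because the credential expires after at most 24 hours, it can be useful to check when it lapses. The client_cert_expiry helper below is a sketch that assumes the kubeconfig embeds a base64-encoded client certificate and that openssl is installed:

```shell
# Print the expiry timestamp of the client certificate embedded in a kubeconfig.
client_cert_expiry() {
    # $1 = path to the kubeconfig file
    grep 'client-certificate-data' "${1}" | head -1 | awk '{print $2}' \
        | base64 -d | openssl x509 -noout -enddate
}

client_cert_expiry aro-cluster.kubeconfig \
    || echo "Could not read a client certificate from aro-cluster.kubeconfig" >&2
```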

Console client secret

When the external authentication provider was created, both the CLI and Console OAuth clients were specified. For the OpenShift Web Console to use the confidential client, a Secret must be created in the openshift-config namespace.

The Secret must follow the naming convention: <external_auth_name>-console-openshift-console, with the value of the client secret stored in the clientSecret key.

Locating the client secret in RHBK

  1. In the openshift realm, select Clients in the left navigation bar.

  2. Select openshift-console.

  3. Select the Credentials tab.

  4. The secret is in the Client Secrets section.

Creating the secret on the ARO cluster

oc create secret generic aro-hcp-auth-console-openshift-console \
  --namespace openshift-config \
  --from-literal=clientSecret=<openshift_console_client_secret>

Replace <openshift_console_client_secret> with the value from RHBK. After this secret is created, the external authentication integration is complete.

Assigning RBAC policies

With the administrative credential active, assign OpenShift RBAC policies so that identities from the external authentication provider are granted the permissions they need.

A common approach is to grant cluster-admin access to members of the openshift_admins group:

oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: openshift_admins
EOF

Additional policies can be created as needed to support other roles and groups from the external authentication provider.
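
For example, read-only access could be granted to members of a hypothetical openshift_viewers group from the external provider (the group name is illustrative; substitute one defined in your identity provider):

```shell
# Write a ClusterRoleBinding manifest granting the built-in "view"
# ClusterRole to the (hypothetical) openshift_viewers group.
cat << EOF > viewers-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-viewers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: openshift_viewers
EOF
```

Apply the manifest with oc apply -f viewers-binding.yaml while the administrative credential is active.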

Next steps

With external authentication configured and RBAC policies applied, users can now access the ARO cluster using credentials from the external authentication provider.

See Accessing OpenShift with External Credentials for steps on how to access the ARO HCP cluster using the CLI and Web Console.