
Installing OpenShift Container Platform (OCP) can be challenging on its own. Taking the process a step further and installing OCP in a disconnected environment, where the OCP nodes may not even have internet access, complicates matters further. This article walks through the full process of installing OCP using a Satellite 6 server.

The Satellite server will sync the OCP containers needed for the installation, and the OCP inventory will be modified to point to the Satellite server. Lastly, the default image stream names will be updated to point to the Satellite 6 server for future application deployments.

Before the disconnected installation can be deployed, the Red Hat Satellite server needs the OpenShift repos synced and configured from the Red Hat Content Delivery Network (CDN).

The following article describes how to install and configure Satellite 6 server for installing OpenShift Container Platform:

Using Satellite 6 Server for OpenShift Container Platform Node Preparation

In both the reference architecture for VMware vSphere and Red Hat Virtualization (RHV), the Red Hat Subscription Management (RHSM) module is used for registering nodes to either Red Hat's Content Delivery Network (CDN) or an internal Satellite server.

The following vars are used for either installation:

Connected Install

Disconnected Install







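The variable listings themselves did not survive above, so as an illustration only, an inventory for the RHSM role might look something like the following. The variable names and values here are placeholders sketched from typical RHSM registration options, not copied from the reference architectures:

```
# Hypothetical sketch -- exact variable names come from the rhsm role
# used by your reference architecture inventory.

# Connected install: register directly against the Red Hat CDN.
rhsm_user: <rhn-username>            # placeholder
rhsm_password: <rhn-password>        # placeholder
rhsm_pool: <subscription-pool-id>    # placeholder

# Disconnected install: register against the internal Satellite 6 server,
# typically with an organization and activation key instead of portal credentials.
rhsm_satellite: satellite.example.com   # placeholder hostname
rhsm_org: e2e                           # placeholder organization
rhsm_activationkey: <activation-key>    # placeholder
```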
OpenShift Required Containers for Installation

OpenShift uses the following containers for its base installation (the same five images registered with Satellite in the hammer commands later in this article):

openshift3/ose-haproxy-router
Default router implementation for the OpenShift Container Platform environment. Supports HTTP, HTTPS (with SNI), WebSockets, and TLS with SNI.

openshift3/registry-console
Web console for the Atomic Registry, an open source enterprise container image registry based on the OpenShift and Cockpit projects.

openshift3/ose-deployer
Default deployer pod for OpenShift. Handles scaling down the old replication controller, scaling up the new one, running hooks, and capturing logs from the deployment process.

openshift3/ose-pod
Infrastructure pod used to reserve resources in a Kubernetes cluster.

openshift3/ose-docker-registry
Supports the v2 Docker Registry API. Includes authentication and authorization support for OpenShift Container Platform.


These containers must be located on the internal Satellite server that the disconnected installation will be based on. The ose repos must also be available on the Satellite 6 server:

[root@master-0 ~]# yum repoinfo rhel-7-server-ose-3.6-rpms
Loaded plugins: enabled_repos_upload, package_upload, product-id, search-disabled-repos, subscription-manager
Repo-id : rhel-7-server-ose-3.6-rpms/x86_64
Repo-name : Red Hat OpenShift Container Platform 3.6 (RPMs)
Repo-status : enabled
Repo-revision: 1506113587
Repo-updated : Fri Sep 22 20:53:07 2017
Repo-pkgs : 503
Repo-size : 622 M
Repo-baseurl :
Repo-expire : 1 second(s) (last: Thu Nov 2 14:54:41 2017)
Filter : read-only:present
Repo-excluded: 20
Repo-filename: /etc/yum.repos.d/redhat.repo

repolist: 503
Uploading Enabled Repositories Report
Loaded plugins: product-id

Preparing Satellite 6 for a disconnected installation

To prepare the Satellite 6.2 installation for deploying OpenShift packages and OpenShift container images, start by running the following Python script:

This script is to be executed on the Satellite Server to be used for the disconnected installation.

Note: The Satellite 6 server must be able to reach the Red Hat container registry in order to pull from it.

./ --password admin_pass

First, the script queries the Red Hat container registry for all openshift3 images, then creates a product and repositories for those images.

[root@master-0 ~]# curl -s"openshift3" | python -mjson.tool | grep '"name":' | cut -d: -f2 | sed -e 's/ "//g' -e 's/",//g'
... content abbreviated ...
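The filtering that the curl pipeline performs can also be sketched in a few lines of Python. This assumes, for illustration, a search response shaped as a JSON object with a "results" list whose entries carry a "name" field; the canned payload below stands in for the live registry call:

```python
import json

def openshift3_image_names(payload: str) -> list:
    """Extract openshift3/* image names from a registry search response.

    Assumes a hypothetical response shape: {"results": [{"name": ...}, ...]}.
    """
    data = json.loads(payload)
    return [entry["name"] for entry in data.get("results", [])
            if entry["name"].startswith("openshift3/")]

# Canned response in place of the live registry query:
sample = json.dumps({"results": [
    {"name": "openshift3/ose-pod"},
    {"name": "openshift3/ose-deployer"},
    {"name": "rhel7/etcd"},
]})
print(openshift3_image_names(sample))
# -> ['openshift3/ose-pod', 'openshift3/ose-deployer']
```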

Next, the script will supply the appropriate installation vars to insert into the OCP install playbooks:

openshift_disable_check: "docker_image_availability"
openshift_docker_insecure_registries: ""
openshift_docker_additional_registries: ""
openshift_examples_modify_imagestreams: True

Note that the installation vars also bypass the image availability check. At the time of this writing, the skopeo check appends the registry name twice.

Please see this bugzilla for details:

Lastly, the script synchronizes the repos with the content imported from the Red Hat container registry. This step can be time-consuming.

Verifying the installation was successful

Verify the default image stream location has been modified:

[root@master-0 ~]# oc get is -n openshift
dotnet docker-registry.default.svc:5000/openshift/dotnet 2.0,1.1,1.0
dotnet-runtime docker-registry.default.svc:5000/openshift/dotnet-runtime 2.0
... content abbreviated ...

On the cluster's infra or app nodes, query the Docker-formatted container images to see the image tags pointing to the Satellite 6 registry:

[root@infra-0 ~]# docker images
REPOSITORY   TAG   IMAGE ID       CREATED       SIZE
             v3.   00e38cdddcde   8 weeks ago   988.8 MB
             v3.   89fd398a337d   8 weeks ago   970.2 MB
             v3.   6a83937f497f   8 weeks ago   1.058 GB
             v3.   63accd48a0d7   8 weeks ago   208.6 MB

Lastly, verify the installation completed properly by deploying a new application with the new image streams. Make sure the application being deployed has the applicable repositories set up in the Satellite server.

Troubleshooting a Failed Installation

If the deployment is having issues, manually pull the Docker-formatted container images to test:

docker pull

If the Docker-formatted container images pull successfully, connectivity to Satellite should be fine.

In summary, the disconnected installation of OpenShift via Satellite 6 is greatly simplified by some automation work done ahead of time. The script does the bulk of the work on the Satellite deployment and provides the required variables for installing OCP in a disconnected environment.

Manual Steps to Perform

The following steps can be performed manually instead of using the script:

  • Create the product.
  • Create the repositories and assign them to the product.
  • Synchronize the product to pull down the images.
hammer product create --name "ocp36" --organization "e2e"

hammer repository create --name "openshift3/ose-haproxy-router" --content-type "docker" --url "" --docker-upstream-name "openshift3/ose-haproxy-router" --product "ocp36" --organization "e2e"
hammer repository create --name "openshift3/registry-console" --content-type "docker" --url "" --docker-upstream-name "openshift3/registry-console" --product "ocp36" --organization "e2e"
hammer repository create --name "openshift3/ose-deployer" --content-type "docker" --url "" --docker-upstream-name "openshift3/ose-deployer" --product "ocp36" --organization "e2e"
hammer repository create --name "openshift3/ose-pod" --content-type "docker" --url "" --docker-upstream-name "openshift3/ose-pod" --product "ocp36" --organization "e2e"
hammer repository create --name "openshift3/ose-docker-registry" --content-type "docker" --url "" --docker-upstream-name "openshift3/ose-docker-registry" --product "ocp36" --organization "e2e"

hammer product synchronize --name "ocp36" --organization "e2e"
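The five hammer repository create calls above follow a single pattern, so they can be generated with a small POSIX shell loop. This is a sketch that only prints the commands rather than running them; the --url values, elided above, would still need to be supplied for your Satellite environment:

```shell
# Generate the hammer repository-create calls for each required image.
# ORG and PRODUCT match the values used in the commands above; adjust as needed.
ORG="e2e"
PRODUCT="ocp36"
cmds=""
for image in openshift3/ose-haproxy-router openshift3/registry-console \
             openshift3/ose-deployer openshift3/ose-pod openshift3/ose-docker-registry; do
  cmds="$cmds
hammer repository create --name \"$image\" --content-type docker --docker-upstream-name \"$image\" --product \"$PRODUCT\" --organization \"$ORG\""
done
# Print the generated commands for review before running them.
printf '%s\n' "$cmds"
```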

