
I thought I’d share how I’ve automated a large portion of the deployment of OpenShift 4.4 on bare metal from Packet. I did this rather quickly, so your mileage may vary. You should always consider using the official documentation if you are doing something serious! I’m assuming you have:

  • SSH keys configured in Packet
  • A domain registered in AWS Route53 (feel free to use your favorite DNS service)
  • Access to OpenShift subscriptions

I used the Parsippany, USA (EWR1) datacenter, but this should work with any datacenter.

First, deploy the following in EWR1:

  • x1.small.x86 ($0.40/hour)
  • Operating System = Licensed – RHEL 7

This node will act as our “helper”. This is not to be confused with the bootstrap node for deploying OpenShift. We will deploy that later. The “helper” will be where we run the script to get everything ready to go.

Once the x1.small.x86 node is up and running, SSH to it and download the scripts (git isn’t installed by default).

# wget
# wget
# wget
# wget
# chmod +x *.sh

Now download your pull-secret from the OpenShift Install Page and drop it into your current working directory as pull-secret.txt. After that, run the script and pass it three arguments:

  • The pool ID to use that contains the OpenShift subscriptions.
  • The domain name ( below)
  • The sub-domain name and/or cluster name (test below)
# ./ 8a85f99c6f0fa8e3016f19db8d17768e test
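Before kicking it off, it's worth sanity-checking that the pull secret saved cleanly, since a truncated copy/paste is a common failure mode. A minimal check, assuming a python3 interpreter is available on the helper:

```shell
# Verify pull-secret.txt exists and parses as JSON before the script uses it.
if [ -f pull-secret.txt ]; then
    python3 -m json.tool pull-secret.txt > /dev/null && echo "pull-secret.txt is valid JSON"
else
    echo "pull-secret.txt not found in the current directory"
fi
```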

This will take a little while to run, and it does quite a lot. You can view the script if you want to see everything it does. In the end, if everything worked, you should see this:

==== create manifests
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
INFO Consuming Worker Machines from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming Master Machines from target directory
==== Create publicly accessible directory, Copy ignition files, Create iPXE files
==== all done, you can now point iPXE servers to:

Your IP address will be different of course. As you can see, you are provided with the iPXE boot URLs for the bootstrap, master, and worker nodes. Now you can boot the following in Packet.

  • bootstrap – c2.medium.x86 – custom iPXE – use the bootstrap.boot URL above
  • master0 – c2.medium.x86 – custom iPXE – use the master.boot URL above
  • master1 – c2.medium.x86 – custom iPXE – use the master.boot URL above
  • master2 – c2.medium.x86 – custom iPXE – use the master.boot URL above
  • worker1 – c2.medium.x86 – custom iPXE – use the worker.boot URL above
  • worker2 – c2.medium.x86 – custom iPXE – use the worker.boot URL above
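
The .boot files the script generates are iPXE scripts. Purely as a hypothetical illustration of the format (the file names, helper IP placeholder, and kernel arguments below are assumptions, not the script's actual output; always use the URLs the script printed):

```
#!ipxe
# Hypothetical sketch of a bootstrap.boot file; <helper-ip> is a placeholder.
kernel http://<helper-ip>/rhcos-installer-kernel ip=dhcp rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=http://<helper-ip>/rhcos-metal.raw.gz coreos.inst.ignition_url=http://<helper-ip>/bootstrap.ign
initrd http://<helper-ip>/rhcos-installer-initramfs.img
boot
```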

As those boot, you’ll need to add their IP addresses to Amazon Route53 and also update haproxy with the right IP addresses.

Here are the changes to Route53 I made (as an example):

DNS Entries in Route53
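If you prefer the CLI over the console, the same records can be pushed with the AWS CLI. This is a hedged sketch: the hosted zone ID, record name, and IP below are placeholders, and per the official UPI docs OpenShift 4.4 also needs api-int, *.apps, and etcd records, not just the api record shown here.

```shell
# Sketch only: the IP, domain, and zone ID below are placeholders.
API_IP="147.75.0.10"            # hypothetical address from Packet
RECORD="api.test.example.com."  # api.<cluster>.<domain>
cat > change-batch.json <<EOF
{
  "Comment": "OpenShift API record",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${RECORD}",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "${API_IP}"}]
    }
  }]
}
EOF
python3 -m json.tool change-batch.json > /dev/null && echo "change batch OK"
# Applying it requires AWS credentials and your real hosted zone ID:
# aws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 --change-batch file://change-batch.json
```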

For editing haproxy, you can just edit the values in the file and run the script.

# vi
<assign IP addresses>
# ./
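
For orientation, the part of the haproxy config you are editing looks something like the following. This is a hypothetical sketch (backend names and IPs are assumptions, not the script's actual output); only the server addresses should need changing:

```
# Hypothetical sketch of the API and machine-config backends in
# /etc/haproxy/haproxy.cfg; replace the IPs with your nodes' addresses.
backend api-server
    balance roundrobin
    server bootstrap 147.75.0.20:6443 check
    server master0   147.75.0.21:6443 check
    server master1   147.75.0.22:6443 check
    server master2   147.75.0.23:6443 check

backend machine-config-server
    balance roundrobin
    server bootstrap 147.75.0.20:22623 check
    server master0   147.75.0.21:22623 check
    server master1   147.75.0.22:22623 check
    server master2   147.75.0.23:22623 check
```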

Now you can watch and wait for the deployment to complete:

# ./openshift-install --dir=packetinstall wait-for bootstrap-complete --log-level=info
INFO Waiting up to 20m0s for the Kubernetes API at

It should look like this if it succeeds

# ./openshift-install --dir=packetinstall wait-for bootstrap-complete --log-level=info
INFO Waiting up to 20m0s for the Kubernetes API at
INFO API v1.17.1 up
INFO Waiting up to 40m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources

Once it returns you can remove the bootstrap server (or comment it out) from /etc/haproxy/haproxy.cfg and restart haproxy.

# vi /etc/haproxy/haproxy.cfg
<comment out bootstrap node>
# systemctl restart haproxy.service

Then you can source your kubeconfig and be on your way.

# export KUBECONFIG=/root/packetinstall/auth/kubeconfig
# ./oc whoami

You can get the nodes and see that the masters are there.

# ./oc get nodes

The workers will not be there because you need to approve their Certificate Signing Requests (CSRs).

# ./oc get csr

You can approve the pending requests quickly like this.

# ./oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs ./oc adm certificate approve

Now you should be able to point your browser at the OpenShift console, where test is the cluster name and the base domain is your registered domain ($2 and $3 from your command at the start).

If you want to enable an image registry quickly, you can do that by running the script below. Note that this is not meant for production use, as it uses local storage.

# ./
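The script itself isn't shown here, but a minimal emptyDir-backed registry setup typically amounts to a single patch to the image registry operator config. A hedged sketch of that approach (the patch may differ from what the script actually does):

```shell
# Sketch: build the operator patch and validate it locally before applying.
cat > registry-patch.json <<'EOF'
{"spec": {"managementState": "Managed", "storage": {"emptyDir": {}}}}
EOF
python3 -m json.tool registry-patch.json > /dev/null && echo "patch OK"
# Against the live cluster (assumes KUBECONFIG is exported):
# ./oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch "$(cat registry-patch.json)"
```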

If you want to create some persistent volumes you can run the script. It will create four persistent volumes on the NFS directory that is exported from the helper node.

# ./
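The script isn't reproduced here, but an NFS-backed persistent volume is just a PersistentVolume object pointing at the helper's export. A hypothetical sketch of one such volume (the helper IP, export path, size, and name are all assumptions):

```shell
# Hypothetical example; adjust server, path, and size to match your helper.
HELPER_IP="147.75.0.5"
cat > pv0001.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: ${HELPER_IP}
    path: /exports/pv0001
EOF
echo "wrote pv0001.yaml"
# ./oc create -f pv0001.yaml
```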

Now you can download the RHEL 8.1 guest image, upload it to /var/www/html on the helper node and get to deploying some VMs on OpenShift Virtualization!
