While installing an OpenShift cluster on a cloud isn’t difficult, the old school developer in me wants as much of my environment as possible to be in my house and fully under my control. I have some spare hardware in my basement that I wanted to use as an OpenShift 4 installation, but not enough to warrant a full blown cluster.

CodeReady Containers, or CRC for short, is perfect for that. Rather than try to rephrase what it is, I’ll just copy it directly from their site:

“CodeReady Containers brings a minimal, preconfigured OpenShift 4.1 or newer cluster to your local laptop or desktop computer for development and testing purposes. CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10.”

The hiccup is that while my server is in my basement, I don’t want to have to physically sit at the machine to use it. Since CRC is deployed as a virtual machine, I needed a way to get to that VM from any other machine on my home network. This blog talks about how to configure HAProxy on the host machine to allow access to CRC from elsewhere on the network.

I ran the following steps on a CentOS 8 installation, but they should work on any of the supported Linux distributions. You’ll also need some form of DNS resolution between your client machines and the DNS entries that CRC expects. In my case, I use a Pi-hole installation running on a Raspberry Pi (which effectively uses dnsmasq as described later in this post).
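For example, if Pi-hole handles DNS for your network, one way to do this (a sketch only, assuming Pi-hole’s standard /etc/dnsmasq.d/ drop-in directory; the file name is arbitrary) is to add the same address entries described later in this post, pointing at the CRC host machine’s IP:

# /etc/dnsmasq.d/02-crc.conf on the Pi-hole (hypothetical file name)
# Resolve the CRC application and API hostnames to the CRC host machine
address=/apps-crc.testing/SERVER_IP
address=/api.crc.testing/SERVER_IP

Restart the Pi-hole DNS service afterward so the new entries take effect.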

It’ll become obvious very quickly when you read this, but you’ll need sudo access on the CRC host machine.

Running CRC

The latest version of CRC can be downloaded from Red Hat’s site. You’ll need to download two things:

  • The crc binary itself, which is responsible for the management of the CRC virtual machine
  • Your pull secret, which is used during creation; save this in a file somewhere on the host machine

This blog isn’t going to go into the details of setting up CRC. Detailed information can be found in the Getting Started Guide in the CRC documentation.

That said, if you’re looking for a TL;DR version of that guide, it boils down to:

crc setup
crc start -p <path-to-pull-secret-file>

Make sure CRC is running on the destination machine before continuing, since we’ll need the IP address that the VM is running on.
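If you want to confirm the VM is up before moving on, the crc binary can report both its state and its IP address:

crc status
crc ip

The address printed by crc ip is the one we’ll point HAProxy at in the next section.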

Configuring the Host Machine

We’ll use firewalld and HAProxy to route the host’s inbound traffic to the CRC instance. Before we can configure that, we’ll need to install a few dependencies:

sudo dnf -y install haproxy policycoreutils-python-utils

Configuring the Firewall

The CRC host machine needs to allow inbound connections on a variety of ports used by OpenShift. The following commands configure the firewall to open up those ports:

sudo systemctl start firewalld
sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --add-port=443/tcp --permanent
sudo systemctl restart firewalld
sudo semanage port -a -t http_port_t -p tcp 6443
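To double-check that the changes took effect, you can list the ports firewalld now allows and confirm the SELinux label on 6443 (the grep is just to trim the output):

sudo firewall-cmd --list-ports
sudo semanage port -l | grep http_port_t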

Configuring HAProxy

Once the firewall is configured to allow traffic into the server, HAProxy is used to forward it to the CRC instance. Before we can configure that, we’ll need to know the IP of the server itself, as well as the IP of the CRC virtual machine:

export SERVER_IP=$(hostname --ip-address)
export CRC_IP=$(crc ip)

Note: If your server is running DHCP, you’ll want to take steps to ensure its IP doesn’t change, either by changing it to run on a static IP or by configuring DHCP reservations. Instructions for how to do that are outside the scope of this blog, but chances are if you’re awesome enough to want to set up a remote CRC instance, you know how to do this already.
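That said, as a rough sketch only (the connection name and addresses below are made up; substitute your own), a NetworkManager-based host can be pinned to a static IP with nmcli:

# Assumes a connection profile named "eth0" and a 192.168.1.0/24 network
sudo nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
sudo nmcli connection up eth0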

We’re going to replace the default haproxy.cfg file, so to be safe, create a backup copy:

cd /etc/haproxy
sudo cp haproxy.cfg haproxy.cfg.orig

Replace the contents of the haproxy.cfg file with the following:

global
    debug

defaults
    log global
    mode http
    timeout connect 0
    timeout client 0
    timeout server 0

frontend apps
    bind SERVER_IP:80
    bind SERVER_IP:443
    option tcplog
    mode tcp
    default_backend apps

backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP check

frontend api
    bind SERVER_IP:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check

Note: Generally speaking, setting the timeouts to 0 is a bad idea. In this context, we set them to 0 to keep websockets from timing out. Since you are (or rather, should be) running CRC in a development environment, this shouldn’t be quite as big of a problem.

You can either manually change the instances of SERVER_IP and CRC_IP as appropriate, or run the following commands to automatically perform the replacements:

sudo sed -i "s/SERVER_IP/$SERVER_IP/g" haproxy.cfg
sudo sed -i "s/CRC_IP/$CRC_IP/g" haproxy.cfg

Once that’s finished, start HAProxy:

sudo systemctl start haproxy
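If you want HAProxy (and firewalld) to come back automatically after the host reboots, enable both services as well:

sudo systemctl enable firewalld
sudo systemctl enable haproxy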

Configuring DNS for Clients

As I said earlier, your client machines will need to be able to resolve the DNS entries used by CRC. This will vary depending on how you handle DNS. One possible option is to use dnsmasq on your client machine.

Before doing that, you’ll need to update NetworkManager to use dnsmasq. This is done by creating a new NetworkManager config file:

cat << EOF > /tmp/00-use-dnsmasq.conf
[main]
dns=dnsmasq
EOF

sudo mv /tmp/00-use-dnsmasq.conf /etc/NetworkManager/conf.d/00-use-dnsmasq.conf

You’ll also need to add DNS entries for the CRC server:

cat << EOF > /tmp/01-crc.conf
address=/apps-crc.testing/SERVER_IP
address=/api.crc.testing/SERVER_IP
EOF

sudo mv /tmp/01-crc.conf /etc/NetworkManager/dnsmasq.d/01-crc.conf

Again, you can either manually enter the IP of the host machine or use the following command to replace it:

sudo sed -i "s/SERVER_IP/$SERVER_IP/g" /etc/NetworkManager/dnsmasq.d/01-crc.conf

Once the changes have been made, restart NetworkManager:

sudo systemctl reload NetworkManager
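To verify that resolution is working from the client, query one of the CRC hostnames (dig is provided by the bind-utils package; nslookup works just as well). Both should return the IP of the CRC host machine:

dig +short api.crc.testing
dig +short console-openshift-console.apps-crc.testing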

Accessing CRC

The crc binary provides subcommands for discovering the console URL and the authentication information needed to access the CRC instance:

crc console --url
https://console-openshift-console.apps-crc.testing

crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p mhk2X-Y8ozE-9icYb-uLCdV https://api.crc.testing:6443'

The URL from the first command can be used to access the web console from any machine with the appropriate DNS resolution configured; the login credentials come from the output of the second command.
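For example, from any client machine with the oc binary installed, logging in to the remote instance with the developer credentials shown above looks like this:

oc login -u developer -p developer https://api.crc.testing:6443
oc get projects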

To give credit where it is due, much of this information came from this gist by Trevor McKay.

Happy Coding :)
