
In a previous blog, Meet Red Hat Device Edge with MicroShift, we demonstrated how to build an x86 Red Hat Device Edge 8.7 image that included MicroShift 4.12. This blog builds upon those concepts but adds a few new twists. First, we are going to build the image using Red Hat Enterprise Linux 9.2 Beta. We will also be using an early candidate release of MicroShift, 4.13.0-ec.4. Together we will build another Red Hat Device Edge image, but this time it will be Arm based instead of x86 based. We will do the majority of our work on an Adlink Ampere Altra Developer Platform, leveraging Arm-based KVM virtual machines on that system as our image-building host and our pseudo edge device. It should be noted that this deployment is still in pre-release form and not yet officially supported.

Lab Setup

Before we cover building and deploying the image, let's briefly go over the environment. The Adlink Ampere Altra Developer Platform KVM host has 64 cores and 196GB of memory, and we have installed Red Hat Enterprise Linux 9.1 on it. We will configure two KVM virtual machines on this host, both with the following configuration:

  • 8 CPU cores
  • 8GB of memory
  • 120GB of disk space
  • 1 DHCP network interface

One of the virtual machines will have Red Hat Enterprise Linux 9.2 Beta installed as a general server system where we will build our image. The other we will not touch until our image is built, at which point we will deploy the image onto it. Given we need to build an image, let's move on to that process.
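
For reference, creating a KVM guest with this profile can be sketched with virt-install; the VM name, ISO path, and os-variant below are placeholders, not the exact values used in this lab.

```shell
# Hypothetical virt-install invocation matching the profile above:
# 8 vCPUs, 8GB of memory, a 120GB disk, and one DHCP NIC on the default network.
# The name, ISO path, and os-variant are placeholders -- adjust for your host.
sudo virt-install \
  --name rhel92-imagebuilder \
  --vcpus 8 \
  --memory 8192 \
  --disk size=120 \
  --network network=default \
  --os-variant rhel9.1 \
  --cdrom /var/lib/libvirt/images/rhel-9.2-beta-aarch64-boot.iso
```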

Initial Prerequisites

To build our Red Hat Device Edge image, we will log in to the first virtual machine, the one where we installed Red Hat Enterprise Linux 9.2 Beta. We need to make sure a few packages are installed so that we can use ImageBuilder to build our Arm image. It should be noted that at this time ImageBuilder cannot cross-build for different architectures, which is why we are doing all of this on an Arm system. First, make sure the following repos are available on the RHEL 9.2 Beta virtual machine:

$ sudo yum repolist
Updating Subscription Management repositories.
repo id repo name
rhel-9-for-aarch64-appstream-beta-rpms Red Hat Enterprise Linux 9 for ARM 64 - AppStream Beta (RPMs)
rhel-9-for-aarch64-baseos-beta-rpms Red Hat Enterprise Linux 9 for ARM 64 - BaseOS Beta (RPMs)

Next we need to make sure the following packages are installed on the host, as these are all needed for the image compose process and for building the custom ISO.

$ sudo dnf -y install createrepo yum-utils lorax skopeo composer-cli cockpit-composer podman genisoimage isomd5sum xorriso

Once the required packages are installed, we need to enable the cockpit and osbuild-composer sockets.

$ sudo systemctl enable --now cockpit.socket
Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket → /usr/lib/systemd/system/cockpit.socket.

$ sudo systemctl enable --now osbuild-composer.socket
Created symlink /etc/systemd/system/sockets.target.wants/osbuild-composer.socket → /usr/lib/systemd/system/osbuild-composer.socket.
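
With both sockets enabled, a quick sanity check is to confirm the osbuild-composer API is answering:

```shell
# Confirm the osbuild-composer API is reachable over its socket.
sudo composer-cli status show
```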

 

Image Building Process

Now that we have our prerequisites in place, let's move on to the process of building our Arm-based Red Hat Device Edge image with MicroShift. Unlike the previous blog, where we synced down the additional repositories we needed, here we will simply download the packages we need and create our own custom repository. The following packages need to be manually downloaded and saved into the microshift-local repository directory we will create.
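
As one way to gather them, the yum-utils package we installed earlier provides yumdownloader, which fetches RPMs without installing them. A sketch, assuming the repositories providing these packages are enabled on the host; the package list shown is illustrative, not exhaustive:

```shell
# Sketch: download RPMs (without installing them) into the local repo directory.
# Assumes the repositories providing these packages are enabled on this host;
# the package list shown is illustrative, not exhaustive.
sudo mkdir -p /var/repos/microshift-local
sudo yumdownloader --destdir /var/repos/microshift-local \
  microshift microshift-networking microshift-selinux \
  openshift-clients cri-o cri-tools
```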

$ sudo mkdir -p /var/repos/microshift-local
$ sudo ls -l /var/repos/microshift-local/
total 275380
-rw-r--r--. 1 root root 23609949 Apr 7 09:31 cri-o-1.25.2-13.rhaos4.12.git3e4b64e.el9.aarch64.rpm
-rw-r--r--. 1 root root 8094250 Apr 7 09:31 cri-tools-1.25.0-2.el9.aarch64.rpm
-rw-r--r--. 1 root root 50937333 Apr 7 09:31 microshift-4.13.0~ec.4-202303070857.p0.gcf0bce2.assembly.ec.4.el9.aarch64.rpm
-rw-r--r--. 1 root root 24763 Apr 7 09:31 microshift-networking-4.13.0~ec.4-202303070857.p0.gcf0bce2.assembly.ec.4.el9.aarch64.rpm
-rw-r--r--. 1 root root 24074 Apr 7 09:31 microshift-selinux-4.13.0~ec.4-202303070857.p0.gcf0bce2.assembly.ec.4.el9.noarch.rpm
-rw-r--r--. 1 root root 44344876 Apr 7 09:31 openshift-clients-4.12.0-202303240916.p0.g31aa3e8.assembly.stream.el9.aarch64.rpm
-rw-r--r--. 1 root root 2900438 Apr 7 09:31 openvswitch2.17-3.1.0-2.el9.aarch64.rpm
-rw-r--r--. 1 root root 2902801 Apr 7 09:31 openvswitch3.1-3.1.0-2.el9fdp.aarch64.rpm
-rw-r--r--. 1 root root 31205563 Apr 7 09:31 openvswitch3.1-3.1.0-2.el9fdp.src.rpm
-rw-r--r--. 1 root root 15091 Apr 7 09:31 openvswitch-selinux-extra-policy-1.0-31.el9fdp.noarch.rpm
-rw-r--r--. 1 root root 1688774 Apr 7 09:31 python3-docutils-0.16-6.el9.noarch.rpm
-rw-r--r--. 1 root root 18716 Apr 7 09:31 python3-imagesize-1.2.0-6.el9.noarch.rpm
-rw-r--r--. 1 root root 48461 Apr 7 09:31 python3-importlib-metadata-1.7.0-2.el9.noarch.rpm
-rw-r--r--. 1 root root 139052 Apr 7 09:31 python3-jsonschema-4.9.1-1.el9ap.noarch.rpm
-rw-r--r--. 1 root root 2091629 Apr 7 09:31 python3-pygments-2.7.4-4.el9.noarch.rpm
-rw-r--r--. 1 root root 168013 Apr 7 09:31 python3-snowballstemmer-1.9.0-10.el9.noarch.rpm
-rw-r--r--. 1 root root 2365388 Apr 7 09:31 python3-sphinx-3.4.3-5.el9.noarch.rpm
-rw-r--r--. 1 root root 48909 Apr 7 09:31 python3-sphinxcontrib-applehelp-1.0.2-5.el9.noarch.rpm
-rw-r--r--. 1 root root 42776 Apr 7 09:31 python3-sphinxcontrib-devhelp-1.0.2-5.el9.noarch.rpm
-rw-r--r--. 1 root root 52123 Apr 7 09:31 python3-sphinxcontrib-htmlhelp-1.0.3-6.el9.noarch.rpm
-rw-r--r--. 1 root root 18510 Apr 7 09:31 python3-sphinxcontrib-jsmath-1.0.1-12.el9.noarch.rpm
-rw-r--r--. 1 root root 47544 Apr 7 09:31 python3-sphinxcontrib-qthelp-1.0.3-5.el9.noarch.rpm
-rw-r--r--. 1 root root 46697 Apr 7 09:31 python3-sphinxcontrib-serializinghtml-1.1.4-5.el9.noarch.rpm
-rw-r--r--. 1 root root 27501 Apr 7 09:31 python3-sphinx-theme-alabaster-0.7.12-13.el9.noarch.rpm
-rw-r--r--. 1 root root 13848 Apr 7 09:31 python3-zipp-0.5.1-1.el9.noarch.rpm
-rw-r--r--. 1 root root 313875 Apr 7 09:31 tuned-2.20.0-1.2.20230317gitbc41116e.el9fdp.noarch.rpm
-rw-r--r--. 1 root root 36229 Apr 7 09:31 unbound-devel-1.16.2-2.el9.aarch64.rpm
-rw-r--r--. 1 root root 540677 Apr 7 09:31 unbound-libs-1.16.2-2.el9.aarch64.rpm

Notice in the output above that there are some additional openvswitch3 and python3 packages. We pulled these because MicroShift currently requires openvswitch2.17, which did not ship with Red Hat Enterprise Linux 9.2 Beta; only openvswitch3 did. To work around that temporarily, we used the openvswitch3 source RPM to build an openvswitch2.17 RPM we could use for this demonstration. The python3 packages were requirements for the rpmbuild to compile appropriately. This issue is being addressed.
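
For reference, rebuilding an RPM from a source RPM generally follows the pattern below. This is only a sketch of the mechanics; it omits the spec file changes that were actually needed to produce an openvswitch2.17 package from the openvswitch3.1 sources, and the spec file name is an assumption.

```shell
# Sketch of the general srpm rebuild mechanics (spec file edits not shown).
sudo dnf -y install rpm-build rpmdevtools
rpmdev-setuptree                                 # creates the ~/rpmbuild layout
rpm -ivh openvswitch3.1-3.1.0-2.el9fdp.src.rpm   # unpacks sources and spec
# ...edit ~/rpmbuild/SPECS/openvswitch3.1.spec as needed...
rpmbuild -ba ~/rpmbuild/SPECS/openvswitch3.1.spec
```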

Now we can use the createrepo command to create a local repository of those packages we just downloaded.

$ sudo createrepo /var/repos/microshift-local
Directory walk started
Directory walk done - 31 packages
Temporary output repo path: /var/repos/microshift-local/.repodata/
Preparing sqlite DBs
Pool started (with 5 workers)
Pool finished

With the repository created, we now need to build a repository toml (Tom's Obvious, Minimal Language) file that defines the package source.

$ sudo cat << EOF > /var/repos/microshift-local/microshift.toml
id = "microshift-local"
name = "MicroShift local repo"
type = "yum-baseurl"
url = "file:///var/repos/microshift-local/"
check_gpg = false
check_ssl = false
system = false
EOF

Take the toml file we created above and apply it to the osbuild-composer environment by adding it as a source. Once added, we can validate which sources are available by listing them.

$ sudo composer-cli sources add /var/repos/microshift-local/microshift.toml

$ sudo composer-cli sources list
appstream
baseos
microshift-local
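
We can also inspect an individual source to confirm the settings from our toml were recorded:

```shell
# Show the stored definition of the source we just added.
sudo composer-cli sources info microshift-local
```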

Now that we have all the package sources set up for our Red Hat Device Edge MicroShift image, we can begin to construct the toml file that defines our image. In this toml we define a version, the packages to be included in the image, and the services to be enabled.

$ cat << EOF > ~/rhde-microshift.toml
name = "rhde-microshift"
description = "RHDE Microshift Image"
version = "1.0.0"
modules = []
groups = []

[[packages]]
name = "microshift"
version = "*"

[[packages]]
name = "openshift-clients"
version = "*"

[[packages]]
name = "git"
version = "*"

[[packages]]
name = "iputils"
version = "*"

[[packages]]
name = "bind-utils"
version = "*"

[[packages]]
name = "net-tools"
version = "*"

[[packages]]
name = "iotop"
version = "*"

[[packages]]
name = "redhat-release"
version = "*"

[customizations]

[customizations.services]
enabled = ["microshift"]
EOF

The blueprint toml we created above can now be pushed into osbuild-composer, and we can validate it is there by listing the available blueprints.

$ sudo composer-cli blueprints push ~/rhde-microshift.toml

$ sudo composer-cli blueprints list
rhde-microshift

Once the blueprint is pushed up, we should be able to compose our image. However, when dealing with the Red Hat Enterprise Linux 9.2 Beta, I found that the default repos for ImageBuilder pointed to the standard location for 9.2, which had yet to be made available. This causes an error when trying to compose:

$ sudo composer-cli compose start-ostree rhde-microshift rhel-edge-container
ERROR: DepsolveError: DNF error occurred: RepoError: There was a problem reading a repository: Failed to download metadata for repo '777001b5b86531d37fb976f2d2da8ef6ba2f0130a9a6c1dc30cd8097a052cba3' [baseos: https://cdn.redhat.com/content/dist/rhel9/9.2/aarch64/baseos/os]: Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

We can work around this issue by editing the following file and updating the two content paths accordingly. Alternatively, copy the original file into /etc/osbuild-composer/repositories/ and then make the edits there. The latter is the preferred method, though I am using the former here.

$ sudo vi /usr/share/osbuild-composer/repositories/rhel-92.json

https://cdn.redhat.com/content/dist/rhel9/9.2/aarch64/baseos/os

to

https://cdn.redhat.com/content/beta/rhel9/9/aarch64/baseos/os

and

https://cdn.redhat.com/content/dist/rhel9/9.2/aarch64/appstream/os

to

https://cdn.redhat.com/content/beta/rhel9/9/aarch64/appstream/os

At this point we are ready to compose our image by issuing the composer-cli compose command. In our case the compose uses the rhde-microshift blueprint and builds a rhel-edge-container.

$ sudo composer-cli compose start-ostree rhde-microshift rhel-edge-container
Compose 9d2af85d-2302-4c96-89ab-7ce52585f614 added to the queue

The process of building the image can take some time depending on the system it is run on. We can watch the progress either by running the composer-cli compose status command repeatedly or by placing watch in front of it.

$ sudo composer-cli compose status
ID Status Time Blueprint Version Type Size
9d2af85d-2302-4c96-89ab-7ce52585f614 RUNNING Fri Apr 7 14:05:51 2023 rhde-microshift 1.0.0 edge-container

$ watch sudo composer-cli compose status
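
If a compose fails or appears stuck, the build log is also available by compose UUID:

```shell
# Display the osbuild log for a given compose UUID.
sudo composer-cli compose log 9d2af85d-2302-4c96-89ab-7ce52585f614
```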

Once the image has finished being built we should see a status like the one below.

$ sudo composer-cli compose status
ID Status Time Blueprint Version Type Size
9d2af85d-2302-4c96-89ab-7ce52585f614 FINISHED Fri Apr 7 14:16:31 2023 rhde-microshift 1.0.0 edge-container

Now we need to pull down a local copy of the image so we can work with it, using composer-cli compose image.

$ sudo composer-cli compose image 9d2af85d-2302-4c96-89ab-7ce52585f614
9d2af85d-2302-4c96-89ab-7ce52585f614-container.tar

Once the image file is downloaded, we next need to copy it into the local container storage of our host and tag it accordingly. We can validate it is there by running the podman images command.

$ sudo skopeo copy oci-archive:9d2af85d-2302-4c96-89ab-7ce52585f614-container.tar containers-storage:localhost/rhde-microshift:latest
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
INFO[0000] Image operating system mismatch: image uses OS "linux"+architecture "aarch64", expecting one of "linux+arm64"
Getting image source signatures
Copying blob 123b3a439a18 done
Copying config 733a820bb1 done
Writing manifest to image destination


$ sudo podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/rhde-microshift latest 733a820bb1bf 4 hours ago 1.15 GB

Now we will go ahead and start the container locally with podman. We need to do this because we want to extract the contents of the container image.

$ sudo podman run --rm -p 8000:8080 rhde-microshift:latest &
[1] 30045

$ sudo podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b2877a9d741c localhost/rhde-microshift:latest nginx -c /etc/ngi... 9 seconds ago Up 9 seconds 0.0.0.0:8000->8080/tcp compassionate_matsumoto
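
The container serves the ostree repository over HTTP via nginx, so we can sanity-check it from the host through the published port:

```shell
# Fetch the ostree repo config through the published port
# (8000 on the host maps to 8080 in the container).
curl -s http://localhost:8000/repo/config
```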

 

Create Zero Touch Provisioning ISO For Red Hat Device Edge

With our Red Hat Device Edge container image running we now need to create a directory structure that will be the location for the artifacts we need to gather so we can generate a complete zero touch Red Hat Device Edge bootable iso image. First we will create the generate-iso directory and an ostree subdirectory inside. We will copy the repo directory from the running container into this ostree subdirectory. Once we have completed the copy we can stop the container as it will no longer be needed. We can also validate the contents of the ostree/repo directory to confirm it looks like the listing below.

$ mkdir -p ~/generate-iso/ostree

$ sudo podman cp b2877a9d741c:/usr/share/nginx/html/repo ~/generate-iso/ostree

$ sudo podman stop b2877a9d741c
b2877a9d741c

$ sudo ls -l ~/generate-iso/ostree/repo
total 16
-rw-r--r--. 1 root root 38 Apr 7 14:15 config
drwxr-xr-x. 2 root root 6 Apr 7 14:15 extensions
drwxr-xr-x. 258 root root 8192 Apr 7 14:15 objects
drwxr-xr-x. 5 root root 49 Apr 7 14:15 refs
drwxr-xr-x. 2 root root 6 Apr 7 14:15 state
drwxr-xr-x. 3 root root 19 Apr 7 14:15 tmp
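
We can also ask ostree itself what the copied repository contains; the refs listed should include rhel/9/aarch64/edge, the same ref our kickstart will consume with ostreesetup:

```shell
# List the refs and the latest commit in the copied ostree repository.
sudo ostree refs --repo ~/generate-iso/ostree/repo
sudo ostree log --repo ~/generate-iso/ostree/repo rhel/9/aarch64/edge
```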

Now that we have our Arm based rpm-ostree image staged we can move onto creating a few additional artifacts we need for our zero touch boot iso. The first one we need is the grub.cfg:

$ cat << EOF > ~/generate-iso/grub.cfg
set default="1"

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2

set timeout=60
### END /etc/grub.d/00_header ###

search --no-floppy --set=root -l 'RHEL-9-2-0-BaseOS-aarch64'

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Install Red Hat Enterprise Linux 9.2' --class red --class gnu-linux --class gnu --class os {
  linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-aarch64 ro inst.ks=hd:LABEL=RHEL-9-2-0-BaseOS-aarch64:/ks.cfg
  initrd /images/pxeboot/initrd.img
}
EOF

For our zero touch provisioning workflow we also need a kickstart file to automate the installation process. The kickstart below is a straightforward example; however, I want to point out a few items of interest:

  • We are defining the ostreesetup to consume the image that will be built into the iso image we will create.
  • We are enabling the MicroShift firewall rules needed for access.
  • We need to define a pull-secret so we can pull down the additional images when MicroShift starts.
  • We are setting the volume group name for our partitions to rhel which is also the default that LVMS will use in MicroShift.
  • We are also creating a softlink to the MicroShift kubeconfig for both the root and bschmaus users

$ cat << EOF > ~/generate-iso/ks.cfg
keyboard --xlayouts='us'
lang en_US.UTF-8
network --bootproto=dhcp --device=link --onboot=on --ipv6=auto --activate
timezone America/Chicago --utc
ignoredisk --only-use=sda
clearpart --none --initlabel
part /boot --fstype="xfs" --ondisk=sda --size=1024
part pv.473 --fstype="lvmpv" --ondisk=sda --size=65544
part /boot/efi --fstype="efi" --ondisk=sda --size=256 --fsoptions="umask=0077,shortname=winnt"
volgroup rhel --pesize=4096 pv.473
logvol / --fstype="xfs" --size=61440 --name=root --vgname=rhel
logvol swap --fstype="swap" --size=4096 --name=swap --vgname=rhel
reboot
text
rootpw --iscrypted --allow-ssh <root encrypted password here>
user --groups=wheel --name=bschmaus --password=<bschmaus encrypted password here> --iscrypted --gecos="bschmaus"
services --enabled=ostree-remount
ostreesetup --nogpg --url=file:///run/install/repo/ostree/repo --osname=rhel --ref=rhel/9/aarch64/edge

%post --log=/var/log/anaconda/post-install.log --erroronfail

echo -e 'bschmaus\tALL=(ALL)\tNOPASSWD: ALL' >> /etc/sudoers

mkdir -p /etc/crio
cat > /etc/crio/openshift-pull-secret << PULLSECRETEOF
***PUT YOUR PULL-SECRET HERE***
PULLSECRETEOF
chmod 600 /etc/crio/openshift-pull-secret

firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16
firewall-offline-cmd --zone=trusted --add-source=169.254.169.1
firewall-offline-cmd --zone=public --add-port=6443/tcp
firewall-offline-cmd --zone=public --add-port=80/tcp
firewall-offline-cmd --zone=public --add-port=443/tcp
firewall-offline-cmd --zone=public --add-port=30000-32767/tcp
firewall-offline-cmd --zone=public --add-port=30000-32767/udp

mkdir -p /root/.kube
ln -s /var/lib/microshift/resources/kubeadmin/kubeconfig /root/.kube/config
mkdir -p /home/bschmaus/.kube
ln -s /var/lib/microshift/resources/kubeadmin/kubeconfig /home/bschmaus/.kube/config
chown -R bschmaus:bschmaus /home/bschmaus/.kube

%end
EOF
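
As an optional step (the package is not in our prerequisite list above), the kickstart can be linted with ksvalidator from pykickstart before it gets baked into the ISO:

```shell
# Optional: lint the kickstart before building the ISO around it.
sudo dnf -y install pykickstart
ksvalidator ~/generate-iso/ks.cfg
```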

Next we need to pull in a Red Hat Enterprise Linux 9.2 Beta boot iso from Red Hat. I am pulling my iso from a location within my lab.

$ scp root@192.168.0.22:/var/lib/libvirt/images/rhel-9.2-beta-aarch64-boot.iso ~/generate-iso

Finally, we need to create the recook script. This script does the dirty work of creating our zero touch provisioning ISO, packing in our kickstart and the Red Hat Device Edge image we composed. Note that the variables in the script have been escaped so it can be copied from the blog into a file without the variables being interpreted.

$ cat << EOF > ~/generate-iso/recook.sh
#!/bin/bash
# Ensure this script is run as root
if [ "\$EUID" != "0" ]; then
echo "Please run as root" >&2
exit 1
fi

# Change to the script's directory and set a few bash options
cd "\$(dirname "\$(realpath "\$0")")"
set -ex

# Create a temp dir
tmp=\$(mktemp -d)
mkdir "\$tmp/iso"

# Mount the boot iso into our temp dir
mount rhel-9.2-beta-aarch64-boot.iso "\$tmp/iso"

# Create a directory for our new ISO
mkdir "\$tmp/new"

# Copy the contents of the boot ISO to our new directory
cp -a "\$tmp/iso/" "\$tmp/new/"

# Unmount the boot ISO
umount "\$tmp/iso"

# Copy our customized files into the new ISO directory
cp ks.cfg "\$tmp/new/iso/"
cp grub.cfg "\$tmp/new/iso/EFI/BOOT/"
cp -r ostree "\$tmp/new/iso/"

# Push directory of new ISO for later commands
pushd "\$tmp/new/iso"

# Create our new ISO
xorriso -as mkisofs -V 'RHEL-9-2-0-BaseOS-aarch64' -r -o ../rhde-ztp.iso -J -joliet-long -cache-inodes -efi-boot-part --efi-boot-image -e images/efiboot.img -no-emul-boot .

implantisomd5 ../rhde-ztp.iso

# Return to previous directory
popd

# Cleanup and give user ownership of ISO
mv "\$tmp/new/rhde-ztp.iso" ./
rm -rf "\$tmp"
chown \$(stat -c '%U:%G' .) ./rhde-ztp.iso
EOF

$ chmod 755 ~/generate-iso/recook.sh

Let's now confirm that our directory structure looks correct. We should have two config files, a script, our ostree directory with the image contents, and the Red Hat Enterprise Linux 9.2 Beta source ISO.

$ cd ~/generate-iso
$ ls -lart
total 2630068
drwxr-xr-x. 3 root root 18 Apr 7 14:22 ostree
-rw-r--r--. 1 root root 853006336 Apr 7 14:29 rhel-9.2-beta-aarch64-boot.iso
-rwxr-xr-x. 1 root root 1407 Apr 7 15:29 recook.sh
-rw-r--r--. 1 root root 752 Apr 7 15:33 grub.cfg
-rw-r--r--. 1 root root 4542 Apr 8 18:13 ks.cfg

At this point, if the directory structure layout looks good, we should be able to generate our zero touch Red Hat Device Edge ISO using the recook script we created above.

$ sudo ./recook.sh 
++ mktemp -d
+ tmp=/tmp/tmp.yB4mEW6FUz
+ mkdir /tmp/tmp.yB4mEW6FUz/iso
+ mount rhel-9.2-beta-aarch64-boot.iso /tmp/tmp.yB4mEW6FUz/iso
mount: /tmp/tmp.yB4mEW6FUz/iso: WARNING: source write-protected, mounted read-only.
+ mkdir /tmp/tmp.yB4mEW6FUz/new
+ cp -a /tmp/tmp.yB4mEW6FUz/iso/ /tmp/tmp.yB4mEW6FUz/new/
+ umount /tmp/tmp.yB4mEW6FUz/iso
+ cp ks.cfg /tmp/tmp.yB4mEW6FUz/new/iso/
+ cp grub.cfg /tmp/tmp.yB4mEW6FUz/new/iso/EFI/BOOT/
+ cp -r ostree /tmp/tmp.yB4mEW6FUz/new/iso/
+ pushd /tmp/tmp.yB4mEW6FUz/new/iso
/tmp/tmp.yB4mEW6FUz/new/iso ~/generate-iso
+ xorriso -as mkisofs -V RHEL-9-2-0-BaseOS-aarch64 -r -o ../rhde-ztp.iso -J -joliet-long -cache-inodes -efi-boot-part --efi-boot-image -e images/efiboot.img -no-emul-boot .
xorriso 1.5.4 : RockRidge filesystem manipulator, libburnia project.

Drive current: -outdev 'stdio:../rhde-ztp.iso'
Media current: stdio file, overwriteable
Media status : is blank
Media summary: 0 sessions, 0 data blocks, 0 data, 45.4g free
xorriso : WARNING : -volid text does not comply to ISO 9660 / ECMA 119 rules
xorriso : NOTE : -as mkisofs: Ignored option '-cache-inodes'
xorriso : UPDATE : 28600 files added in 1 seconds
Added to ISO image: directory '/'='/tmp/tmp.yB4mEW6FUz/new/iso'
xorriso : UPDATE : 30833 files added in 1 seconds
xorriso : UPDATE : 30833 files added in 1 seconds
xorriso : UPDATE : 1.86% done
xorriso : UPDATE : 32.56% done
xorriso : UPDATE : 52.86% done
xorriso : UPDATE : 62.29% done, estimate finish Sun Apr 09 10:35:29 2023
xorriso : UPDATE : 72.98% done, estimate finish Sun Apr 09 10:35:30 2023
xorriso : UPDATE : 90.36% done
ISO image produced: 898514 sectors
Written to medium : 898514 sectors at LBA 0
Writing to 'stdio:../rhde-ztp.iso' completed successfully.

+ implantisomd5 ../rhde-ztp.iso
Inserting md5sum into iso image...
md5 = ff42112cd2e7501c7ca21affa9a1b261
Inserting fragment md5sums into iso image...
fragmd5 = bcd4bb8223e1162623e881a2c8548de7cfffea173483e8d9a48795365d88
frags = 20
Setting supported flag to 0
+ popd
~/generate-iso
+ mv /tmp/tmp.yB4mEW6FUz/new/rhde-ztp.iso ./
+ rm -rf /tmp/tmp.yB4mEW6FUz
++ stat -c %U:%G .
+ chown bschmaus:bschmaus ./rhde-ztp.iso

Once the script completes, we should have a rhde-ztp.iso in our directory.

$ ls -l rhde-ztp.iso 
-rw-r--r--. 1 bschmaus bschmaus 1840156672 Apr 9 10:35 rhde-ztp.iso
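
Because recook.sh ran implantisomd5, we can also verify the implanted checksum with checkisomd5, which comes from the isomd5sum package we installed earlier:

```shell
# Verify the md5 checksum implanted into the ISO during the build.
checkisomd5 rhde-ztp.iso
```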

 

Boot Zero Touch Provisioning ISO for Red Hat Device Edge with MicroShift

Take the ISO and either write it onto a USB drive or copy it to a hypervisor where the Arm virtual machine can consume it; I am doing the latter for this demonstration. In the previous blog we showed a video of the device booting up and the kickstart configuration doing the heavy lifting. Since we would be seeing the same thing as in that video, only this time on Arm, I will skip the video of the process here.
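
If you take the USB route instead, writing the image with dd looks like the sketch below; /dev/sdX is a placeholder for the actual USB device, and this overwrites everything on it.

```shell
# DESTRUCTIVE: overwrites the target device. /dev/sdX is a placeholder --
# confirm the correct device with lsblk before running.
sudo dd if=rhde-ztp.iso of=/dev/sdX bs=4M status=progress conv=fsync
```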

Once the edge virtual machine has rebooted, we should be able to log in to the host and confirm MicroShift is fully operational.

$ ssh bschmaus@192.168.0.130
The authenticity of host '192.168.0.130 (192.168.0.130)' can't be established.
ECDSA key fingerprint is SHA256:zK97YMexGaHYXP1+OBSi+i7d0Z+/R87gaFX4vppUD2k.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.0.130' (ECDSA) to the list of known hosts.
bschmaus@192.168.0.130's password:
Script '01_update_platforms_check.sh' FAILURE (exit code '1'). Continuing...
Boot Status is GREEN - Health Check SUCCESS
Last login: Sat Apr 8 18:23:17 2023

$ cat /etc/redhat-release
Red Hat Enterprise Linux release 9.2 Beta (Plow)
$ uname -a
Linux adlink-vm3.schmaustech.com 5.14.0-283.el9.aarch64 #1 SMP PREEMPT_DYNAMIC Thu Feb 23 19:37:21 EST 2023 aarch64 aarch64 aarch64 GNU/Linux

$ oc get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-dns pod/dns-default-rflw5 2/2 Running 3 38h
openshift-dns pod/node-resolver-95lcl 1/1 Running 1 38h
openshift-ingress pod/router-default-64fc9949cd-tbj2d 1/1 Running 2 38h
openshift-ovn-kubernetes pod/ovnkube-master-ppwj8 4/4 Running 7 38h
openshift-ovn-kubernetes pod/ovnkube-node-zbnhd 1/1 Running 3 (16m ago) 38h
openshift-service-ca pod/service-ca-67df7c6965-bzv4v 1/1 Running 1 38h
openshift-storage pod/topolvm-controller-59974b64d9-thj8z 4/4 Running 4 38h
openshift-storage pod/topolvm-node-w9kp8 4/4 Running 9 (16m ago) 38h

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 38h
openshift-dns service/dns-default ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9154/TCP 38h
openshift-ingress service/router-internal-default ClusterIP 10.43.212.134 <none> 80/TCP,443/TCP,1936/TCP 38h

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
openshift-dns daemonset.apps/dns-default 1 1 1 1 1 kubernetes.io/os=linux 38h
openshift-dns daemonset.apps/node-resolver 1 1 1 1 1 kubernetes.io/os=linux 38h
openshift-ovn-kubernetes daemonset.apps/ovnkube-master 1 1 1 1 1 kubernetes.io/os=linux 38h
openshift-ovn-kubernetes daemonset.apps/ovnkube-node 1 1 1 1 1 kubernetes.io/os=linux 38h
openshift-storage daemonset.apps/topolvm-node 1 1 1 1 1 <none> 38h

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
openshift-ingress deployment.apps/router-default 1/1 1 1 38h
openshift-service-ca deployment.apps/service-ca 1/1 1 1 38h
openshift-storage deployment.apps/topolvm-controller 1/1 1 1 38h

NAMESPACE NAME DESIRED CURRENT READY AGE
openshift-ingress replicaset.apps/router-default-64fc9949cd 1 1 1 38h
openshift-service-ca replicaset.apps/service-ca-67df7c6965 1 1 1 38h
openshift-storage replicaset.apps/topolvm-controller-59974b64d9 1 1 1 38h

We have confirmed MicroShift is fully functional and ready to deploy workloads. Hopefully this blog provides an idea of what the workflow looks like for Red Hat Device Edge on Arm with Red Hat Enterprise Linux 9.2 and MicroShift 4.13. The process looks fairly similar to the x86 workflow, but there are a few Arm-specific nuances, as pointed out along the way.

