Creating Custom V3 Ignition Files With Network Configuration for Static IP Addressing
This post walks through modifying Red Hat CoreOS v3 ignition files. I put this process together because I needed to build an automated way to provision OpenShift (OCP) 4.6 clusters on VMware in an environment where DHCP was not available, which meant figuring out how to set custom network configuration for each node in the cluster. The information here may also be useful for requirements beyond the specific case shown.
In OCP 4.6, a new option is available for VMware to specify the network configuration for the first boot of the machine using a custom “guestinfo” property. However, as this applies only to the first boot and not to any subsequent boots, a process must be defined to make the network configuration permanent. Therefore, I decided that a good option would be to bake this network configuration into each node’s ignition file.
The ignition configuration detailed in this article only works with ignition v3, which is available from OCP 4.6 onwards.
Below is an example of a vanilla v3.1.0 ignition file generated by the OpenShift install tool:
{
  "ignition": {
    "config": {
      "merge": [
        {
          "source": "https://api-int.ocp4.example.com:22623/config/master"
        }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          {
            "source": "data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFRENDQWZpZ0F3SUJBZ0lJSnVMM1IrQUw2aDh3RFFZSktvWklodmNOQVFFTEJRQXdKakVTTUJBR0ExVUUKQ3hNSmIzQmxibk5vYVdaME1SQXdEZ1lEVlFRREV3ZHliMjkwTFdOaE1CNFhEVEl3TVRFeE9URTNNVGsxT0ZvWApEVE13TVRFeE56RTNNVGsxT0Zvd0pqRVNNQkFHQTFVRUN4TUpiM0JsYm5Ob2FXWjBNUkF3RGdZRFZRUURFd2R5CmIyOTBMV05oTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEyUHFXMnVnZGhoY1MKaExYT0g2eFNFbjF6bXZJTElpQlIwRVNrRkdwdkhobzNoZW44alk1OUx4KzJWWXVqSkg5cHQ5WGFDZ25nUG8yNApaZG03NEt3bW1SakJCUWxqbzVMSUx2MlNnemdxemp2Y0ZMRlpsR2lQekVFbnZNMEFTYnc3R0FrYi9ES1BBdTNFCm1vWlAvVFZsVEdTREJJd1ZhOWJMTnArU21CMlAra2NxRFNibVB5VGdHMnlKanMydTNWTTdZb2JVcFA5U3QwQkQKcm1XVmhYdHNrWDFkQlV5NjNSeWg3aUt4WjBZUXZUYXJMK0haRVJvTUZYamNXU1Axa25NaXI3SWZzYWhQaWNHRApSL3l3N1ZxVFA0dWd0OTUxdVJpeGFpZG15NzZzem01OWRxMkZPYkwxNGxtNEZ1TWdXdHIzS24rYXd6NDJpZGJNCmxGRjRyYU9RZ3dJREFRQUJvMEl3UURBT0JnTlZIUThCQWY4RUJBTUNBcVF3RHdZRFZSMFRBUUgvQkFVd0F3RUIKL3pBZEJnTlZIUTRFRmdRVUwzZVorQkJoSzJWVCs0WFkyb2RiTFg0ZUtsUXdEUVlKS29aSWh2Y05BUUVMQlFBRApnZ0VCQUFhRU9nUm9lc1F1eWZwQmZ4aG15YmRBSENiWVIwMkZBOTZybGtYbXc5b2VXYVVPNFUzSExwdXVhc05SClM1QmtWTVF6UFZ2MGJoemM2ZWxOVmlaTnpvQk5ITkhRZklnQXRBTzFUU1JPV2F2eFhoRjRzOXJ6aVovWnJpbEwKcENPb0kwTEF0UXFKZll3cHNWQk9obFFlRUlyTGtUcUFiL29pVzNKR1NoMWNWTm0wYVFmYVZkbFZqZlZndXNEZQp3b2ZtMnp4aTBvaWkvQlhhYjdnWVJ3WDRmVW1LdFc5eDlIRk5zOXFDU1dvQ3R1U0VTNmJVMHNmQnlUOWs5Z0FoCm9NSzZnMVRHQWhEcnVkYm93RkRTcDZLdDJHbmh6bm1NWGVSSDU3L1RKVUZwbHNmOGR1aGZGZmwxaFRGZEdRVUwKd0hZc2MxTTFLZHliVHVvSkZsMGlGWkp5Uis4PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
          }
        ]
      }
    },
    "version": "3.1.0"
  }
}
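For reference, the vanilla files (bootstrap.ign, master.ign, and worker.ign) are produced by the OpenShift install tool. Assuming an install-config.yaml is already in place in the target directory, the invocation looks like this:
# Generate the bootstrap, master, and worker ignition files
# (the directory path is an example).
openshift-install create ignition-configs --dir ~/ocp4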
Custom Network Configuration Requirements
To have static network configuration, each node requires its own individual ignition file. The master and worker node files can be based on the vanilla master and worker ignition files, respectively.
For the bootstrap VM, the process differs slightly because its ignition file is quite large. The usual approach is to store the real bootstrap ignition file on a web server and create a second, small ignition file (normally named `append-bootstrap.ign`) for the bootstrap VM, whose only job is to instruct the machine to pull the real bootstrap ignition file from the web server. The bootstrap node is booted only once, for bootstrapping purposes. It's possible to customise either the real ignition or the 'append' ignition with custom configuration; however, it may not be required in some use cases (for example, when setting the guestinfo property on VMware for network configuration on first boot).
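As a rough illustration, a minimal append-style file could look like the sketch below. The web server URL is a placeholder of my own choosing; point it at wherever the real bootstrap.ign is actually hosted:
# Sketch: create a small append-bootstrap.ign that merges in the real
# bootstrap ignition from a web server (the URL is a hypothetical example).
cat > append-bootstrap.ign <<'EOF'
{
  "ignition": {
    "config": {
      "merge": [
        {
          "source": "http://webserver.example.com:8080/bootstrap.ign"
        }
      ]
    },
    "version": "3.1.0"
  }
}
EOF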
For each new individual ignition file, three files should be added to the storage section to provide the custom network configuration:
- The first file `/etc/hostname` is the file defining the hostname of the node.
- The second file `/etc/sysconfig/network-scripts/ifcfg-ens192` is the network interface configuration (ifcfg) file defining the network configuration.
- The third file `/etc/chrony.conf` is the network time protocol (NTP) chrony configuration.
Example hostname file:
master0.ocp4.example.com
Example network interface configuration (ifcfg) file:
TYPE=Ethernet
NAME="ens192"
DEVICE="ens192"
ONBOOT=yes
NETBOOT=yes
BOOTPROTO=none
IPADDR="10.80.158.7"
NETMASK="255.255.255.240"
GATEWAY="10.80.158.1"
DNS1="10.80.158.5"
Example network time protocol (NTP) chrony file:
server 0.europe.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
Creating Custom Ignitions
The contents of each of the above files must be embedded in the ignition file as base64-encoded strings. These strings can be generated quite easily with the base64 command-line utility:
echo '# Generated by sysadm.life
TYPE=Ethernet
NAME="ens192"
DEVICE="ens192"
ONBOOT=yes
NETBOOT=yes
BOOTPROTO=none
IPADDR="10.80.158.7"
NETMASK="255.255.255.240"
GATEWAY="10.80.158.1"
DNS1="10.80.158.5"' | base64 -w0
To decode and check these strings, use the `-d` option of the base64 command:
echo 'IyBHZW5lcmF0ZWQgYnkgc3lzYWRtLmxpZmUKVFlQRT1FdGhlcm5ldApOQU1FPSJlbnMxOTIiCkRFVklDRT0iZW5zMTkyIgpPTkJPT1Q9eWVzCk5FVEJPT1Q9eWVzCkJPT1RQUk9UTz1ub25lCklQQUREUj0iMTAuODAuMTU4LjciCk5FVE1BU0s9IjI1NS4yNTUuMjU1LjI0MCIKR0FURVdBWT0iMTAuODAuMTU4LjEiCkROUzE9IjEwLjgwLjE1OC41Igo=' | base64 -d
Each file is defined in the storage section of the ignition configuration, with its content specified as a base64-encoded string. An example of this section creating a single file (the hostname file) is shown below:
"storage": {
"files": [
{
"path": "/etc/hostname",
"mode": 420,
"contents": {
"source": "data:text/plain;charset=utf-8;base64,bWFzdGVyMC5vY3A0LmV4YW1wbGUuY29tCg=="
}
}
]
}
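Editing the JSON by hand works but is error-prone. As a sketch, the same entry can be appended to a vanilla ignition file with jq (assuming jq is installed; the file names are the examples from this post):
# Base64-encode the hostname and append an /etc/hostname entry to the
# vanilla master ignition, producing a per-node copy.
B64=$(echo 'master0.ocp4.example.com' | base64 -w0)
jq --arg b64 "$B64" \
  '.storage.files += [{"path": "/etc/hostname", "mode": 420, "contents": {"source": ("data:text/plain;charset=utf-8;base64," + $b64)}}]' \
  master.ign > master0.ocp4.example.com.ign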
Once the new custom ignition file is complete, the configuration should look similar to the example below:
{
  "ignition": {
    "config": {
      "merge": [
        {
          "source": "https://api-int.ocp4.example.com:22623/config/master"
        }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          {
            "source": "data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFRENDQWZpZ0F3SUJBZ0lJVmFMNjVXRGsvdEl3RFFZSktvWklodmNOQVFFTEJRQXdKakVTTUJBR0ExVUUKQ3hNSmIzQmxibk5vYVdaME1SQXdEZ1lEVlFRREV3ZHliMjkwTFdOaE1CNFhEVEl3TURZeE56RTBNVEkwTkZvWApEVE13TURZeE5URTBNVEkwTkZvd0pqRVNNQkFHQTFVRUN4TUpiM0JsYm5Ob2FXWjBNUkF3RGdZRFZRUURFd2R5CmIyOTBMV05oTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF2a1J4UjlNcm0wRlMKWjdDbnNkdmswOFllaTJoOGlMMWRyejBldzhMNUp0b0F2MzEzM2J4T1ZUcW96aWtMQlUvVUM3b1I4ejVncGFBYgpEOThCdGY5S2FKQ2kyRHRxYjJmQU9KV042L1NYaC9EUjR5T0pyVjY1V1NIKzZyMlJwOVo1b0FlY2IrUXdybEFCCnh3SHVEbVovdDA0Rm5zVktzK3VMQzZ5SitiTU81Y2c2ek55cmlCblkvY2pZd2NEekZNNkowSnJ2VG54VlpCT0YKM0RiOE1XQU02bEIvUjYrTzBKcktuWU1xZXdzYTBwMVpoSVNZMVBCN3RvQTMxS2NaREh2QTZMVW9jTGduQUYwago1Ylk4cVJITHQrcnNrdkJ0dHhMdzlDbVRLTVl6eDZaVlFtTDVjRjVlSCtkVXpNSVhuaHNpdVZnRmJPRXpDbWlRCjBnc29FQXlVU1FJREFRQUJvMEl3UURBT0JnTlZIUThCQWY4RUJBTUNBcVF3RHdZRFZSMFRBUUgvQkFVd0F3RUIKL3pBZEJnTlXIUTRFRmdRVVY3L0tEK3o2TmRQZCs2azNNUVRnWEF1dnVTUXdEUVlKS29aSWh2Y05BUUVMQlFBRApnZ0VCQUs0a3BRTHlleGdWOHBNVAJZZ04xVlpnY25Mb2tOZFhDdG8vTGZ4UldqRElDc1lLWkt3azVZVUg3eTVaCm4wV2svTG9wcFJlNkdCUnpEVStCS3daUEZpZWM0V1FqZnowcWdUZ0tOOVBiNzVVY1hZSjA5RVJQR3N1Ymo5aWoKVlE2VGxPb3lVYnhiWjFsamo4MW1aVmpPc3BQbFhySEF4amwvNjRNckpDRWYzbFRrSnJnZ3p5d2RvTmVmOGo0ZQpmOFZSaFlUQklLQVlLSlorMnZzZkRmejRRT1NRTXFhVUtIRHB5UUthc01zQ3dCSFBjRmZYWUgzTUFVenFTdE90CnYwRHFyMjdkVmRsOU1QZlJXRHkrK3g2Nk80Y2llUE9rbzY1OVo2U3J1TWZrQXkwMGZOcjZYK09JMEpaTnI3QzUKSTUyUHBFRTdhMmVEdGxDa2NsUkhYT3ZaR3ZRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
          }
        ]
      }
    },
    "version": "3.1.0"
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": {
          "source": "data:text/plain;charset=utf-8;base64,bWFzdGVyMC5vY3A0LmV4YW1wbGUuY29tCg=="
        }
      },
      {
        "path": "/etc/sysconfig/network-scripts/ifcfg-ens192",
        "mode": 420,
        "contents": {
          "source": "data:text/plain;charset=utf-8;base64,IyBHZW5lcmF0ZWQgYnkgY3JlYXRlaWducyAKVFlQRT1FdGhlcm5ldApOQU1FPSJlbnMxOTIiCkRFVklDRT0iZW5zMTkyIgpPTkJPT1Q9eWVzCk5FVEJPT1Q9eWVzCkJPT1RQUk9UTz1ub25lCklQQUREUj0iMTAuODAuMTU4LjciCk5FVE1BU0s9IjI1NS4yNTUuMjU1LjI0MCIKR0FURVdBWT0iMTAuODAuMTU4LjEiCkROUzE9IjEwLjgwLjE1OC41Ig=="
        }
      },
      {
        "path": "/etc/chrony.conf",
        "mode": 420,
        "contents": {
          "source": "data:text/plain;charset=utf-8;base64,c2VydmVyIGNsb2NrLmNvcnAucmVkaGF0LmNvbSBpYnVyc3QKZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAptYWtlc3RlcCAxLjAgMwpydGNzeW5jCmxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK"
        }
      }
    ]
  }
}
Ignition files can be validated using a container image provided by the CoreOS team. This example uses podman, but docker could be used, too:
podman run --pull=always --rm -i quay.io/coreos/ignition-validate:release - < myconfig.ign
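With one ignition file per node, it's worth validating them all in one pass; a small loop such as this (the paths are the examples used in this post) does the job:
# Validate every generated ignition file; the container exits non-zero
# on an invalid config, so failures are reported per file.
for ign in ~/ocp4/*.ign; do
  echo "Validating ${ign}"
  podman run --pull=always --rm -i quay.io/coreos/ignition-validate:release - < "${ign}"
done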
For each ignition file, a base64 version needs to be created:
base64 -w0 ~/ocp4/master0.ocp4.example.com.ign > ~/ocp4/master0.ocp4.example.com.64
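With several nodes, this too is easily looped. A sketch, assuming the node names follow the example naming used in this post:
# Encode each per-node ignition file; adjust the node list to match
# your cluster (these names are examples).
for node in master0 master1 master2 worker0 worker1; do
  base64 -w0 ~/ocp4/${node}.ocp4.example.com.ign > ~/ocp4/${node}.ocp4.example.com.64
done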
These base64-encoded strings of the whole ignition files are used as part of the VM provisioning. The VM first boots with the temporary network configuration given by the `guestinfo.afterburn.initrd.network-kargs` parameter; then, once the first-boot ignition process has completed, the machine comes up with the static IP configuration defined in its ignition file.
Bonus: Ansible VM Provisioning (on VMware)
If the cluster node VMs are not being provisioned by Ansible, create them from the OVA template and then add the following parameters under Configuration Parameters:
guestinfo.afterburn.initrd.network-kargs:
Specify the temporary first-boot network configuration in kernel argument format, such as:
"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=<dns>"
When setting the `guestinfo.afterburn.initrd.network-kargs` property manually, do not set it in the vApp Options area. It MUST be added as a configuration parameter: in the web console, choose Edit Settings, then under Configuration Parameters click Add.
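If you'd rather script this manual route than click through the web console, something along these lines should work with the govc CLI. This is a sketch only: it assumes govc is installed and authenticated against vCenter, and the VM inventory path and file names are made-up examples:
# Set the ignition data and first-boot network kargs as extraConfig
# entries on an existing VM (inventory path and file names are examples).
govc vm.change -vm '/Datacenter/vm/master0' \
  -e "guestinfo.ignition.config.data.encoding=base64" \
  -e "guestinfo.ignition.config.data=$(cat ~/ocp4/master0.ocp4.example.com.64)" \
  -e "guestinfo.afterburn.initrd.network-kargs=ip=10.80.158.7::10.80.158.1:255.255.255.240:master0.ocp4.example.com:ens192:none nameserver=10.80.158.5"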
If you would prefer to provision the nodes with Ansible instead, this is what I've used to create the virtual machines in VMware, injecting both the custom ignition file and the afterburn first-boot network configuration property:
---
- name: Ensure master nodes are present
  vmware_guest:
    hostname: "{{ vhost }}"
    username: "{{ vuser }}"
    password: "{{ vpassword }}"
    datacenter: "{{ vdatacenter }}"
    datastore: "{{ vdatastore }}"
    validate_certs: false
    name: "{{ item.hostname }}"
    folder: "{{ vfolder }}"
    template: "{{ vm_ova_template }}"
    hardware:
      memory_mb: "{{ vm_memory_mb[item.role] }}"
      num_cpus: "{{ vm_num_cpus[item.role] }}"
    networks:
      - name: "{{ vnetwork }}"
        device_type: vmxnet3
    disk:
      - size_gb: "{{ vm_disk_size_gb[item.role] }}"
        datastore: "{{ vdatastore }}"
        autoselect_datastore: true
    state: poweredon
    vapp_properties:
      - id: guestinfo.ignition.config.data.encoding
        value: "base64"
      - id: guestinfo.ignition.config.data
        value: "{{ lookup('file', cluster_config + '/ignitions/' + item.hostname + '.ign') | b64encode }}"
    customvalues:
      - key: disk.EnableUUID
        value: "TRUE"
      - key: guestinfo.afterburn.initrd.network-kargs
        value: "ip={{ item.ip }}::{{ gateway }}:{{ netmask }}:{{ item.hostname }}:ens192:none nameserver={{ dns }}"
  delegate_to: localhost
  with_items: "{{ nodes | selectattr('role', 'equalto', 'master') | list }}"
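For completeness, here is a hypothetical shape for the variables the task above expects (nodes, gateway, netmask, and dns; the vSphere connection variables such as vhost and vuser are omitted). The values are the examples used throughout this post:
# Sketch of a group_vars file matching the playbook's assumptions; adjust
# hostnames, addresses, and roles for your own cluster.
cat > group_vars/all.yml <<'EOF'
gateway: 10.80.158.1
netmask: 255.255.255.240
dns: 10.80.158.5
nodes:
  - hostname: master0.ocp4.example.com
    ip: 10.80.158.7
    role: master
EOF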
Summary, Thoughts, and Troubleshooting
Using this method makes it quite easy to build a higher level of automation for provisioning virtual machines from the RHCOS OVA in environments with static networking requirements. It significantly lowers the barrier: you don't have to introduce DHCP (which, depending on the rest of your network, could cause problems), nor do you have to set individual boot configuration using the ISO bare-metal installation method. Overall, I quite like this method, as it makes the whole process very slick where the limitations of static addressing would normally prevent this.
About the author
James is a consultant at Red Hat with a background in cloud technologies and infrastructure. He is a passionate advocate of open source, the UNIX philosophy, and the Agile manifesto.