In Part 1, we created a working BIND container with local data storage. We can make changes on the local system that will get picked up in the running container. In this part, we'll explore how we can manage the service from the host with systemctl.
In Part 1, we installed two OCI hooks, oci-register-machine and oci-systemd-hook. OCI hooks are executed either before the container process starts (prestart) or after it shuts down (poststop). These two hooks provide the integration points to systemd on the host. We used machinectl in the previous post to copy files out of our test container, but we didn't look at what else this integration can do for us.
Using machinectl list, we can see which containers systemd knows about. The MACHINE field holds the machine-id that oci-register-machine created from the container ID. We can use that ID with both machinectl and systemctl.
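As a quick illustration (this listing wasn't captured from the running system, so treat the exact column layout as approximate; the ID is the example container used throughout this post):

[root@rhel7-host bind]# machinectl list
MACHINE                          CLASS     SERVICE
47d2c93035cafb1174dedd924cfa4308 container docker

1 machines listed.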
With machinectl status we can see some interesting things about our container, like its IP address, OS release info, and the systemd hierarchy running inside. Since this is from the host's view, you'll see the host PID for /sbin/init, not PID 1.
[root@rhel7-host bind]# machinectl status 47d2c93035cafb1174dedd924cfa4308
47d2c93035cafb1174dedd924cfa4308(47d2c93035cafb1174dedd924cfa4308)
           Since: Mon 2017-05-15 12:01:19 EDT; 3h 58min ago
          Leader: 2734 (systemd)
         Service: docker; class container
            Root: /var/lib/docker/devicemapper/mnt/5d897b599c6b003df74968f73ad56
         Address: 172.17.0.2
                  fe80::42:acff:fe11:2
              OS: Red Hat Enterprise Linux Server 7.3 (Maipo)
            Unit: docker-47d2c93035cafb1174dedd924cfa4308ebb8b924bc0b0b661f60ffe
                  ├─2734 /sbin/init
                  └─system.slice
                    ├─named.service
                    │ └─2788 /usr/sbin/named -u named
                    ├─dbus.service
                    │ └─2777 /bin/dbus-daemon --system --address=systemd: --nofo
                    └─systemd-journald.service
                      └─2766 /usr/lib/systemd/systemd-journald
CONTAINER SYSTEMD
We can also get status from inside the container via systemctl on the host by specifying the -M option. You can query the overall systemd status, check the status of a process, and interact with the process as if it were on the host.
To see the internal container state and check for failed or queued jobs:
[root@rhel7-host bind]# systemctl -M 47d2c93035cafb1174dedd924cfa4308 status
To see the status of the named service in the container:
[root@rhel7-host bind]# systemctl -M 47d2c93035cafb1174dedd924cfa4308 status named
To see what actions are available to us via the unit file, we can use show on the service. Let's grep for Exec to see what's available.
[root@rhel7-host bind]# systemctl -M 47d2c93035cafb1174dedd924cfa4308 show named | grep Exec
ExecStart={ path=/usr/sbin/named ; argv[]=/usr/sbin/named -u named $OPTIONS ; ignore_errors=no ; start_time=[Mon 2017-05-15 12:01:20 EDT] ; stop_time=[Mon 2017-05-15 12:01:20 EDT] ; pid=26 ; code=exited ; status=0 }
ExecReload={ path=/bin/sh ; argv[]=/bin/sh -c /usr/sbin/rndc reload > /dev/null 2>&1 || /bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStop={ path=/bin/sh ; argv[]=/bin/sh -c /usr/sbin/rndc stop > /dev/null 2>&1 || /bin/kill -TERM $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
The named unit file exposes start, stop, and reload actions. We can also use reload-or-restart and let systemd determine what to do instead of searching the show output.
[root@rhel7-host bind]# systemctl -M 47d2c93035cafb1174dedd924cfa4308 reload-or-restart named

[root@rhel7-host bind]# systemctl -M 47d2c93035cafb1174dedd924cfa4308 status named
named.service - Berkeley Internet Name Domain (DNS)
   Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2017-05-15 12:01:20 EDT; 4h 32min ago
  Process: 54 ExecReload=/bin/sh -c /usr/sbin/rndc reload > /dev/null 2>&1 || /bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
  Process: 26 ExecStart=/usr/sbin/named -u named $OPTIONS (code=exited, status=0/SUCCESS)
  Process: 24 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z /etc/named.conf; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS)
 Main PID: 27 (khugepaged)
   CGroup: /system.slice/docker-47d2c93035cafb1174dedd924cfa4308ebb8b924bc0b0b661f60ffee3e4bf715.scope/system.slice/named.service
           └─2788 /usr/sbin/named -u named
           ‣ 27 [khugepaged]
Be sure to note that if you stop the service running in the container with systemctl, that doesn't stop the container. We're directly manipulating the service with these commands, not the container. You could use halt via systemctl -M to stop the container itself.
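For example, using the machine ID from this post (only run this if you actually want the container shut down, since it halts the container's systemd):

[root@rhel7-host bind]# systemctl -M 47d2c93035cafb1174dedd924cfa4308 halt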
CREATING SYSTEMD UNIT
Now that we’ve looked at manually manipulating the service from systemctl
, the only thing left is to start the container when the host starts, so we’re never without DNS. We’ll create a simple unit file for the service, and then enable it on the host.
[root@rhel7-host ~]# vi named-container.service

[Unit]
Description=Containerized BIND service
Requires=docker.service
After=docker.service

[Service]
Restart=on-failure
RestartSec=10
ExecStart=/usr/bin/docker start -a %p
ExecStop=-/usr/bin/docker stop -t 2 %p

[Install]
WantedBy=multi-user.target
Since the name of the unit matches the name of the container we built, we use the %p specifier to pass the right argument. In this case, that's named-container. If you wanted different names for the unit and the container, you could replace %p with the name of the container. Copy the unit file to /etc/systemd/system and we can manipulate the container like a service on the host.
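For example, if the container had been named something else, say bind-dns (a hypothetical name, not one used in this series), the Exec lines would spell out the container name instead of relying on %p:

ExecStart=/usr/bin/docker start -a bind-dns
ExecStop=-/usr/bin/docker stop -t 2 bind-dns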
Let’s use restart
so we know the container is managed by systemd and not already running from a previous step, then we can check the status of BIND inside the container.
[root@rhel7-host bind]# cp named-container.service /etc/systemd/system/

[root@rhel7-host bind]# systemctl enable named-container
Created symlink from /etc/systemd/system/multi-user.target.wants/named-container.service to /etc/systemd/system/named-container.service.

[root@rhel7-host bind]# systemctl restart named-container

[root@rhel7-host bind]# systemctl status named-container
named-container.service - Containerized BIND service
   Loaded: loaded (/etc/systemd/system/named-container.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-05-10 16:45:44 EDT; 26s ago
 Main PID: 23455 (docker-current)
   Memory: 5.9M
   CGroup: /system.slice/named-container.service
           └─23455 /usr/bin/docker-current start -a named-container

May 10 16:45:44 rhel7-host systemd[1]: Started Containerized BIND service.
May 10 16:45:44 rhel7-host systemd[1]: Starting Containerized BIND service...

[root@rhel7-host bind]# systemctl -M 47d2c93035cafb1174dedd924cfa4308 status named
named.service - Berkeley Internet Name Domain (DNS)
   Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-05-10 16:44:52 EDT; 1min 49s ago
  Process: 26 ExecStart=/usr/sbin/named -u named $OPTIONS (code=exited, status=0/SUCCESS)
  Process: 24 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z /etc/named.conf; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS)
 Main PID: 27 (khugepaged)
   CGroup: /system.slice/docker-47d2c93035cafb1174dedd924cfa4308ebb8b924bc0b0b661f60ffee3e4bf715.scope/system.slice/named.service
           └─23438 /usr/sbin/named -u named
           ‣ 27 [khugepaged]
There we have it: a containerized BIND service that can be updated independently of any other service on the system, yet is easily managed and updated from the host.
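As a final sanity check, you can query the container's address that machinectl status reported earlier. The zone name here is a placeholder; substitute whatever zone you configured in Part 1:

[root@rhel7-host ~]# dig @172.17.0.2 example.com SOA +short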
LOOKING AHEAD
This concept of a system service container isn't quite the same as what some others are calling "system containers". System containers are simply stand-alone containers that provide services. The distinction isn't between app and OS containers, but whether the application in the container benefits from distribution and orchestration. Typically, BIND doesn't need to scale up and down based on usage, so it wouldn't benefit from a full orchestration configuration.
System containers are something we think can be very useful. In the Container Catalog you'll find some supported examples, like etcd. That container uses the atomic command found on Red Hat Enterprise Linux Atomic Host to install and configure the service. These system containers are built somewhat differently than the one we built here, using runc to launch the container. There are definite benefits to this approach, and you can read more about the install and run process for etcd in the official documentation.
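As a rough sketch of that workflow (the image name and flags here are illustrative, not taken from this series; check the official etcd system container documentation for the supported invocation):

[root@rhel7-host ~]# atomic install --system --name=etcd registry.access.redhat.com/rhel7/etcd
[root@rhel7-host ~]# systemctl start etcd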
If you'd like to see how the upstream work is progressing on creating and managing these sorts of system containers, you can take a look at the Project Atomic repository.
Additional Resources:
GitHub repository for accompanying files: https://github.com/nzwulfin/named-container