
How to move MediaWiki into a Linux container

In the next installment of this series on moving services to Linux containers, we look at how MediaWiki can operate in a containerized environment.
Image: Moving MediaWiki into a container (by kconcha from Pixabay)

I'd known that I wanted to containerize some of my personal Linux services for a long time. Even though I have a great deal of containerization experience, I just never seemed to get around to working on my own applications. I finally did it, and I'm glad!

The first article in this series introduced the services I containerized. It also discussed some of the pitfalls. I considered lift-and-shift, refactoring, and rewriting options. I also gave the applications easy/moderate/difficult ratings. I then covered in part two my experience with containerizing WordPress.

Here, we tackle MediaWiki, which will feel familiar since it is also an Apache and PHP-FPM based PHP service.

Editor's Note: For the purpose of this article, we assume you'll be building your containers on Red Hat Enterprise Linux 8 using podman build. You may be able to use the instructions on other distributions or with other toolchains; however, some modifications may be required.

Moving MediaWiki


MediaWiki runs in a container image built from the exact same Containerfile. Notice one small thing not mentioned in the WordPress section: we install the crontabs and cronie packages. Unlike WordPress, which has an advanced backup utility, MediaWiki requires us to dump the MariaDB database to get backups, so we need cron.

MAINTAINER fatherlinux <>
RUN yum install -y mariadb-server mariadb php php-apcu php-intl php-mbstring php-xml php-json php-mysqlnd crontabs cronie iputils net-tools && yum clean all
RUN systemctl enable mariadb
RUN systemctl enable httpd
RUN systemctl disable systemd-update-utmp.service
ENTRYPOINT ["/sbin/init"]
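With the Containerfile in hand, building and smoke-testing the shared image is a one-liner each. The commands below are a hedged sketch: the image name and tag are illustrative, the Containerfile above is assumed to be saved in the current directory, and the commands only run when podman and the Containerfile are actually present.

```shell
# Hypothetical build of the shared httpd-php image; the image name is
# illustrative, not from the article.
IMAGE=localhost/httpd-php:latest

if command -v podman >/dev/null 2>&1 && [ -f Containerfile ]; then
    # Build the layered image from the Containerfile above
    podman build -t "$IMAGE" .
    # List the result to confirm the build succeeded
    podman images "$IMAGE"
else
    echo "podman or Containerfile not available; skipping build of $IMAGE"
fi
```

Because the image boots systemd as PID 1, a quick interactive test would be `podman run -d --rm -p 8080:80 "$IMAGE"` followed by a curl against port 8080.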

Other than the use of cron, MediaWiki does not rely on anything special in the httpd-php container image.



Now, let's take a look at how we run MediaWiki slightly differently than WordPress:

Description=Podman container -

ExecStart=/usr/bin/podman run -i --read-only --rm -p 8080:80 --name \
-v /srv/ \
-v /srv/ \
-v /srv/ \
-v /srv/ \
-v /srv/ \
-v /srv/ \
-v /srv/ \
-v /srv/ \
-v /srv/ \
-v /srv/ \
--tmpfs /etc \
--tmpfs /var/log/ \
--tmpfs /var/tmp \
ExecStop=/usr/bin/podman stop -t 3
ExecStopPost=/usr/bin/podman rm -f
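Since the listing above is abbreviated, here is a sketch of what a complete unit file following this pattern might look like. Everything specific in it is a hypothetical placeholder: the service name, container name, image tag, and container-side mount paths are illustrative, not the author's actual configuration.

```ini
# /etc/systemd/system/mediawiki.service -- illustrative sketch only;
# names and paths are placeholders, not the article's real values.
[Unit]
Description=Podman container - mediawiki
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/podman run -i --read-only --rm -p 8080:80 --name mediawiki \
    -v /srv/mediawiki/code/mediawiki:/var/www/html/mediawiki:ro \
    -v /srv/mediawiki/config/LocalSettings.php:/var/www/html/mediawiki/LocalSettings.php:ro \
    -v /srv/mediawiki/data/mariadb:/var/lib/mysql:Z \
    -v /srv/mediawiki/data/images:/var/www/html/mediawiki/images:Z \
    -v /srv/mediawiki/data/skins:/var/www/html/mediawiki/skins:Z \
    -v /srv/mediawiki/data/backups:/var/backups:Z \
    --tmpfs /etc \
    --tmpfs /var/log/ \
    --tmpfs /var/tmp \
    localhost/httpd-php:latest
ExecStop=/usr/bin/podman stop -t 3 mediawiki
ExecStopPost=/usr/bin/podman rm -f mediawiki

[Install]
WantedBy=multi-user.target
```

Note the split between read-only mounts (`:ro` for code and configuration) and writable, SELinux-relabeled mounts (`:Z` for data), which is the core of the design discussed next.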


We run the container with --read-only and --rm, just like WordPress, making it ephemeral. Notice that we bind mount code/mediawiki read-only as well. We could have built another layered image and embedded the MediaWiki code into that layer, but we decided to bind mount it instead. Many PHP apps use a pattern like WordPress, where the code directory is expected to be writable at runtime. This design decision purposefully gives us the option to make the code directory read-only or writable depending on the PHP web application we are putting in a container.

The same httpd-php image can be used for all of them, thereby reducing the size of our software supply chain. If we update glibc, OpenSSL, Apache, PHP-FPM, or PHP to fix security issues, all of our PHP applications inherit the fixes when they are restarted. In a perfect world, we would continuously rebuild this httpd-php image in a CI/CD system with a good test harness for continual updates.

As with WordPress, the configuration files are bind-mounted into the container read-only at runtime. Again, this is a great security upgrade over a standard LAMP server.

There are more data directories bind-mounted into MediaWiki. Here's why:

  • data/mariadb – This is straightforward. The reasons are identical to WordPress.
  • data/images – Stores images, PDFs, and other files uploaded into the wiki.
  • data/skins – Like WordPress, MediaWiki was designed before containers, so it could never anticipate the needs of a technology like them. Unlike WordPress, MediaWiki ships with pre-populated skins in the code/mediawiki/skins directory. Our data/skins directory is a copy of that data combined with our custom skins, bind mounted read/write so that we can add new skins if we like. In the future, this will likely be solved with a "-v skins:skins:O" overlay option to Podman, which would let us overlay our custom data on top of the existing code/mediawiki/skins data that comes with the initial code download.
  • data/logs – Like WordPress, we want access to our logs outside of the container.
  • data/backups – Unlike WordPress, we must use a cron job to dump the MariaDB database on a schedule. Those backups are put in this directory, then copied off-site by the container host.
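The data/backups flow above can be sketched as a small dump script run from cron inside the container. Everything specific here is an assumption for illustration: the script path, database name, backup directory, credentials handling (expected in ~/.my.cnf rather than on the command line), and two-week retention are not from the article.

```shell
# Illustrative MariaDB dump script for cron; the database name, backup
# directory, and retention policy are assumptions, not the article's
# actual configuration.

# Build a timestamped backup filename, e.g. mediawiki-2024-01-31.sql.gz
backup_name() {
    echo "mediawiki-$(date +%F).sql.gz"
}

BACKUP_DIR=/var/backups
FILE="$BACKUP_DIR/$(backup_name)"

# Only attempt the dump when mysqldump and the target directory exist
if command -v mysqldump >/dev/null 2>&1 && [ -d "$BACKUP_DIR" ]; then
    # --single-transaction gives a consistent snapshot without locking
    mysqldump --single-transaction mediawiki | gzip > "$FILE"
    # Prune dumps older than two weeks; the container host copies the
    # rest off-site on its own schedule
    find "$BACKUP_DIR" -name 'mediawiki-*.sql.gz' -mtime +14 -delete
else
    echo "mysqldump not available; would have written $(backup_name)"
fi
```

A crontab entry along the lines of `30 2 * * * root /usr/local/bin/backup-mediawiki.sh` (a hypothetical path) dropped into /etc/cron.d would run this nightly, which is exactly why the image needs the crontabs and cronie packages.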


Wrap up

So, that's the second service: MediaWiki! Perhaps a little more challenging than WordPress, but nothing you can't handle. In this case, I added cronie configurations, and it's also apparent how important the systemd settings inside the container are.

Don't forget to look back at containerizing WordPress if you haven't already. Next up, we'll cover containerizing Request Tracker.

This series is based on "A Hacker's Guide to Moving Linux Services into Containers" and is republished with permission.



Scott McCarty

At Red Hat, Scott McCarty is a technical product manager for the container subsystem team, which enables key product capabilities in OpenShift Container Platform and Red Hat Enterprise Linux. Focus areas include container runtimes, tools, and images. More about me
