Most of my work is exploring the software made available in Red Hat Enterprise Linux (RHEL) and talking to Red Hat’s solution architects, engineers, and various users (and non-users) about what they're implementing. With the release of the RHEL 8 Beta, I have a new set of things to learn, just like everyone else.
I learn best in a hands-on way, and I thought I'd do this in the open to (hopefully) get feedback and improve what I'm doing. The more realistic the target, the better it is for learning, and our industry has plenty of contrived showcases. So I decided to build an application infrastructure on the Red Hat Enterprise Linux 8 Beta that's fairly common in practice and still something I know well.
The target
When I reach for a programming language to solve a problem, I reach for Python. Not saying that's good or bad, but it's what I'm used to. The combination of Django and PostgreSQL is a fairly common pattern for applications, so that's my goal: configuring RHEL 8 Beta for Python 3, Django, Nginx, Gunicorn, and PostgreSQL to serve an application.
The setup
The test environment will change, since I know I want to look at automation, containers, and multiple server setups. When I'm working on new projects, I start with a manual prototype so I can see what needs to happen, how things interact, and then move on to automation and more complex configs. Here we’ll look at the manual prototyping I did.
Let's start with a single RHEL 8 Beta VM. You can install the VM from scratch or use the KVM guest image available via the Beta subscription. If you use the guest image, you'll need to set up a CD that contains the meta-data and user-data for cloud-init. I'm not doing anything interesting with the disk layout, or available packages, so any install will work.
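If you go the guest-image route, here's a minimal sketch of building that cloud-init seed ISO with the NoCloud data source; the hostname, password, and file names are placeholder values of my choosing:

# user-data: set a password for the default cloud-user account (example values only)
cat > user-data << 'EOF'
#cloud-config
password: changeme
chpasswd: { expire: False }
ssh_pwauth: True
EOF

# meta-data: give the instance an ID and hostname (example values only)
cat > meta-data << 'EOF'
instance-id: rhel8-beta-1
local-hostname: 8beta1.example.com
EOF

# Build an ISO labeled "cidata" that cloud-init will treat as a NoCloud seed,
# then attach it to the VM as a CD-ROM
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data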
Let's get down to the details.
Getting to Django
I want to use the latest version of Django, so I'll need a Python 3.5 or later virtual environment (virtualenv). Reading the Beta notes, I can see that Python 3.6 is available so let's check out the system.
[cloud-user@8beta1 ~]$ python
-bash: python: command not found
[cloud-user@8beta1 ~]$ python3
-bash: python3: command not found
Red Hat relies heavily on Python for system tooling in RHEL, so what's happening here?
A lot of Python developers are still on the cusp of moving from Python 2 to Python 3, and Python 3 itself is under active development with new minor versions. So, to meet the need for stable system tools while giving users access to multiple versions of Python, we moved the system Python to a new package and made both 2.7 and 3.6 available for install. For more on the change and the rationale, check out this blog post from Langdon White.
OK then, to get Python I only need to install two packages; python3-pip will get pulled in as a dependency.
sudo yum install python36 python3-virtualenv
Why not use the direct module call like Langdon suggested and skip installing pip3? Thinking ahead to automation, I know that Ansible will need pip installed, since its pip module doesn't support virtualenvs with a custom pip executable.
With a working python3 interpreter, we can move on to getting Django up and running with our other components. There are many ways to do this online, but this is how I do it. Feel free to substitute your process here.
I know I want to use the default versions of PostgreSQL and Nginx available in RHEL 8, so I can install those with Yum.
sudo yum install nginx postgresql-server
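If you're curious exactly which versions the Beta repositories delivered (the numbers will shift as the Beta updates), a quick check:

rpm -q nginx postgresql-server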
We need psycopg2 for PostgreSQL, but I only want to make that available in the application virtualenv, so I'll install that using pip3 along with Django and Gunicorn. But first we need to set up that virtualenv.
There's always discussion about where to install Django projects, but when in doubt I turn to the Linux Filesystem Hierarchy Standard. The FHS says that /srv is for:
"Site-specific data served by this system, such as data and scripts for web servers, data offered by FTP servers, and repositories for version control systems (appeared in FHS-2.3 in 2004)."
That fits the bill, so we'll put everything we need in a directory under /srv that our application user (cloud-user) owns.
sudo mkdir /srv/djangoapp
sudo chown cloud-user:cloud-user /srv/djangoapp
cd /srv/djangoapp
virtualenv django
source django/bin/activate
pip3 install django gunicorn psycopg2
django-admin startproject djangoapp /srv/djangoapp
Setting up PostgreSQL and Django is straightforward: create the database, create the user, and set up permissions. One thing to note during the initial setup of PostgreSQL is the postgresql-setup script shipped with the postgresql-server package. This script can help with basic database cluster administration tasks, like initialization or upgrades. For setting up a new PostgreSQL instance on a RHEL system, we'll run:
sudo /usr/bin/postgresql-setup --initdb
Then we can start PostgreSQL with systemd, create the database, and set up the project in Django. Remember to restart PostgreSQL after you make changes to the client authentication configuration file (usually pg_hba.conf) to set up host password authentication for the application user; I spent far too much time chasing down that problem. If you have other issues, make sure you changed both the IPv4 and IPv6 entries in pg_hba.conf.
sudo systemctl enable --now postgresql
sudo -u postgres psql
postgres=# create database djangoapp;
postgres=# create user djangouser with password 'qwer4321';
postgres=# alter role djangouser set client_encoding to 'utf8';
postgres=# alter role djangouser set default_transaction_isolation to 'read committed';
postgres=# alter role djangouser set timezone to 'utc';
postgres=# grant all on DATABASE djangoapp to djangouser;
postgres=# \q
In /var/lib/pgsql/data/pg_hba.conf:
# IPv4 local connections:
host    all    all    0.0.0.0/0    md5
# IPv6 local connections:
host    all    all    ::1/128      md5
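After editing pg_hba.conf, restart PostgreSQL so the new rules take effect; a one-line reminder:

sudo systemctl restart postgresql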
In /srv/djangoapp/djangoapp/settings.py (the startproject command above creates the project package inside /srv/djangoapp):
# Database
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': '{{ db_name }}',
        'USER': '{{ db_user }}',
        'PASSWORD': '{{ db_password }}',
        'HOST': '{{ db_host }}',
    }
}
Once you've got the project settings.py and the database configured, you can run the development server to check your work. Run the initial migrations first; creating an admin user after you've started the development server is a good way to test that the database connection is working.
./manage.py migrate
./manage.py runserver 0.0.0.0:8000
./manage.py createsuperuser
WSGI Tango?
The development server is useful for testing, but to run an app we'll want to set up a Web Server Gateway Interface (WSGI) server and a proxy in front of it. There are a few common combinations, like Apache HTTPD with uWSGI, or Nginx with Gunicorn. We'll use the latter for this environment, for no reason other than that I've done it before.
A Web Server Gateway Interface server forwards requests from a web server to a Python web framework. WSGI came from the bad old days of CGI API proliferation, and is pretty much standard no matter what web server or Python framework you choose. Although it's common, there's still a lot of nuance and choice when you work with these frameworks. For us, we'll just look at getting Gunicorn and Nginx to talk over a socket.
Since we're looking at a single server hosting both components, I'm going to use a UNIX socket instead of a network socket. And since we need a socket for communication anyway, let's go a step further and make Gunicorn socket activated in systemd.
Socket activated services are pretty straightforward. First we create a socket unit file with a ListenStream pointing at the path where we want the UNIX socket created, then a service unit file that Requires the socket unit. The service unit file just needs to call Gunicorn from the virtualenv and create the WSGI binding between the UNIX socket and the Django application.
Here are some sample unit files for your consideration. First, we'll set up the socket.
[Unit]
Description=Gunicorn WSGI socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target
Next, we'll set up the Gunicorn daemon.
[Unit]
Description=Gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=cloud-user
Group=cloud-user
WorkingDirectory=/srv/djangoapp
ExecStart=/srv/djangoapp/django/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:gunicorn.sock djangoapp.wsgi

[Install]
WantedBy=multi-user.target
For Nginx, it's just a matter of creating the proxy config and setting the static content directory, if you made one. In RHEL, the config files for Nginx live in /etc/nginx/conf.d. You can drop the example below in as /etc/nginx/conf.d/default.conf and start the service. Be sure to set server_name to match your host.
server {
    listen 80;
    server_name 8beta1.example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /srv/djangoapp;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
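Before starting anything, it doesn't hurt to let Nginx validate its own config; a quick sanity check:

sudo nginx -t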
Start the Gunicorn socket and Nginx with systemd, and we're ready to test.
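A minimal sketch of that step, assuming the unit files above were saved as /etc/systemd/system/gunicorn.socket and /etc/systemd/system/gunicorn.service (the filenames are my assumption, not spelled out above):

sudo systemctl daemon-reload
sudo systemctl enable --now gunicorn.socket
sudo systemctl enable --now nginx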
Bad Gateway?
When you hit the URL with a web browser, you'll probably get a 502 Bad Gateway error. This could be a standard permissions issue on the UNIX socket, or a more complex access control problem with SELinux.
In the Nginx error log, you may see the following:
2018/12/18 15:38:03 [crit] 12734#0: *3 connect() to unix:/run/gunicorn.sock failed (13: Permission denied) while connecting to upstream, client: 192.168.122.1, server: 8beta1.example.com, request: "GET / HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/", host: "8beta1.example.com"
If we test Gunicorn directly, we get an empty reply.
curl --unix-socket /run/gunicorn.sock 8beta1.example.com
Why? If you look at the journal, SELinux is probably the culprit. Since we're running a daemon that doesn't have a policy, it gets labeled as init_t. So let's test our theory.
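Before we do, here are a couple of quick ways to see the symptoms for yourself; the exact messages will vary on your system, and the unit names assume the files above were installed as gunicorn.socket and gunicorn.service:

# Recent journal entries for both services
sudo journalctl -u nginx -u gunicorn --since "10 minutes ago"

# If the Gunicorn workers are up, check their SELinux label (expect init_t here)
ps -efZ | grep gunicorn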
sudo setenforce 0
Yes, yes, I know. I'm making Dan Walsh cry, and I'll get angry emails from co-workers around the world. But we're troubleshooting a prototype, and we're only disabling it for a moment to confirm that SELinux really is the cause. So go ahead and do it, but we're going to change it back after we test!
Refresh your browser or re-run the curl command and make sure you get the Django test page.
Once you've confirmed you get the page and there's no other permissions issues, re-enable SELinux.
sudo setenforce 1
No really, do it. I'll wait.
You're probably expecting me to launch into an explanation of audit2allow and creating policy from alerts using sepolgen. I'm not. At this point, we don't have a real Django application, and we're not going to have a full map of what Gunicorn may or may not want to access. But we need a way to leave SELinux enforcing and protecting the system, while allowing our application to run and provide an audit trail so we can build a real policy later.
Enter permissive domains
You may not have heard of SELinux permissive domains, but they aren't new. You may have even used one without knowing it. In fact, when you create a policy from audit messages, the generated policy will be a permissive domain. We're going to look at creating the simplest permissive policy we can.
To create a specific permissive domain for Gunicorn, we need a policy, and we need to label some files to match it. We also need the tools to compile new policies.
sudo yum install selinux-policy-devel
Permissive domains are excellent troubleshooting tools, especially when dealing with a custom application or one that doesn't have a shipped policy. The permissive domain policy for Gunicorn will be as simple as we can make it. We declare the main type (gunicorn_t), declare a type we'll use to label a few executables (gunicorn_exec_t), and then set up the transition for systemd to label the running processes correctly. The last line sets the policy up as permissive by default at the time it’s loaded.
gunicorn.te:
policy_module(gunicorn, 1.0)

type gunicorn_t;
type gunicorn_exec_t;
init_daemon_domain(gunicorn_t, gunicorn_exec_t)

permissive gunicorn_t;
We can compile this policy file and add it to our system.
make -f /usr/share/selinux/devel/Makefile
sudo semodule -i gunicorn.pp
sudo semanage permissive -a gunicorn_t
sudo semodule -l | grep permissive
Let's take a look at anything else SELinux may be blocking, other than everything our unknown daemon wants to touch.
sudo ausearch -m AVC

type=AVC msg=audit(1545315977.237:1273): avc: denied { write } for pid=19400 comm="nginx" name="gunicorn.sock" dev="tmpfs" ino=52977 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file permissive=0
SELinux is stopping Nginx from writing to the UNIX socket that Gunicorn uses. Normally we'd start adjusting policies, but we know there's more work to do. We can also set an existing enforcing domain to be a permissive domain, so let's move httpd_t to permissive as well. That gives Nginx the access it needs while we continue working and troubleshooting.
sudo semanage permissive -a httpd_t
OK, now that SELinux is enforcing (really, you don't want to ship this with SELinux in permissive mode) and our permissive domains are loaded, we need to figure out what to label as gunicorn_exec_t to get everything working again. Hit the website to create more denials.
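Any request will do; for example (assuming the hostname from the Nginx config resolves to this host):

curl -s -o /dev/null http://8beta1.example.com/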
sudo ausearch -m AVC -c gunicorn
We can see there are a lot of messages with 'comm="gunicorn"' doing various actions on files under /srv/djangoapp, so that command is clearly a candidate for labeling.
But there's also this message:
type=AVC msg=audit(1545320700.070:1542): avc: denied { execute } for pid=20704 comm="(gunicorn)" name="python3.6" dev="vda3" ino=8515706 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:var_t:s0 tclass=file permissive=0
If we look at the status of the gunicorn service or check ps, we don't have any running processes. It looks like Gunicorn is trying to call the Python interpreter in our virtualenv, perhaps to start up workers. So for now, let's label these two binaries and see if we get our Django test page.
chcon -t gunicorn_exec_t /srv/djangoapp/django/bin/gunicorn /srv/djangoapp/django/bin/python3.6
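A quick way to confirm the new file labels took (the paths follow the layout we set up earlier):

ls -Z /srv/djangoapp/django/bin/gunicorn /srv/djangoapp/django/bin/python3.6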
You'll need to restart the gunicorn service to pick up the new labels. You can either restart it directly or stop the service and let the socket start it the next time you hit the website, as sketched below. Then check that the processes got the right labels with ps.
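A minimal sketch of that restart, again assuming the unit files were installed as gunicorn.socket and gunicorn.service:

# Either restart the service directly...
sudo systemctl restart gunicorn.service

# ...or stop it and let the socket unit re-activate it on the next request
sudo systemctl stop gunicorn.service
curl -s -o /dev/null http://8beta1.example.com/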
ps -efZ | grep gunicorn
Don't forget to go back and create a real SELinux policy!
If you pull up the AVCs that get logged now, you should see that the last item says permissive=1 for anything related to our application, and permissive=0 for the rest of the system. We can find a better way to fix any issues once we understand all of the real access our app might need. But until then, the system is better protected and we get usable auditing for our Django project.
sudo ausearch -m AVC
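When you do sit down to write that real policy, one common starting point (only a sketch, and not something we'll dig into here) is to feed the collected AVCs for our domain to audit2allow, which generates a module you can review and refine; the module name is arbitrary:

sudo ausearch -m AVC -c gunicorn | audit2allow -M gunicorn_custom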
It works!
Now we have a working Django installation, fronted by an Nginx proxy with a Gunicorn WSGI server. We’ve set up Python 3 and PostgreSQL 10 from the RHEL 8 Beta repositories. From here, you can move on to creating (or just deploying) your Django application or exploring other tools available in RHEL 8 Beta to automate this setup, improve performance, or even look at containerizing this configuration.