In OpenStack jargon, an Instance is a Virtual Machine, the guest workload. It boots from an operating system image, and it is configured with a certain amount of CPU, RAM and disk space, amongst other parameters such as networking or security settings.

In this blog post kindly contributed by Marko Myllynen we’ll explore nine configuration and optimization options that will help you achieve the required performance, reliability and security that you need for your workloads.

Some of the optimizations can be done inside a guest regardless of what the OpenStack Cloud Administrator has enabled in your cloud. However, more advanced options require prior enablement and, possibly, special host capabilities. This means many of the options described here will depend on how the Administrator configured the cloud, or may not be available to some tenants as they are reserved for certain groups. More information about this subject can be found on the Red Hat Documentation Portal and its comprehensive guide on the OpenStack Image Service. Similarly, the upstream OpenStack documentation has some extra guidelines available.

The following configurations should be evaluated for any VM running on any OpenStack environment. These changes have no side effects and are typically safe to enable even if unused.


1) Image Format: QCOW2 or RAW?

OpenStack storage configuration is an implementation choice by the Cloud Administrator, often not fully visible to the tenant. Storage configuration may also change over time without explicit notification from the Administrator, as they add capacity with different specifications.

A new instance on OpenStack is created from a Glance image. The two most prevalent and recommended image formats are QCOW2 and RAW. QCOW2 images (from QEMU Copy On Write) are typically smaller in size: for instance, the image of a server with a 100 GB disk, which takes 100 GB in RAW format, might be only 10 GB when formatted as QCOW2. Regardless of the format, it is a good idea to process images with virt-sysprep(1) and virt-sparsify(1) before uploading them to Glance.
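
For example, assuming a locally available guest image file named rhel7-guest.qcow2 (the file name is just an illustration), the image could be cleaned and shrunk before the upload roughly like this:

  • virt-sysprep -a rhel7-guest.qcow2 # remove machine-specific data such as SSH host keys, log files and MAC addresses
  • virt-sparsify --compress rhel7-guest.qcow2 rhel7-guest-small.qcow2 # write a sparse, compressed copy of the image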

The performance of QCOW2 depends on both the hypervisor kernel and the format version, the latest being QCOW2v3 (sometimes referred to as QCOW3), which performs better than the earlier QCOW2 and almost as well as the RAW format. In general, we assume RAW has better overall performance despite its operational drawbacks (like the lack of snapshots) and the increased time it takes to upload or boot (due to its bigger size). The latest versions of Red Hat OpenStack Platform automatically use the newer QCOW2v3 format (thanks to the recent RHEL versions), and it is possible to check and also convert between RAW and older/newer QCOW2 images with qemu-img(1).
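
As a quick sketch (file names are again only examples), qemu-img can be used to inspect the format version and to convert between formats:

  • qemu-img info rhel7-guest.qcow2 # "compat: 1.1" indicates QCOW2v3, "compat: 0.10" the older QCOW2
  • qemu-img amend -o compat=1.1 rhel7-guest.qcow2 # upgrade an older QCOW2 image in place to QCOW2v3
  • qemu-img convert -f qcow2 -O raw rhel7-guest.qcow2 rhel7-guest.raw # convert QCOW2 to RAW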

OpenStack instances can boot either from a local image or from a remote volume. That means:

  • Image-backed instances benefit significantly from the performance differences between older QCOW2, QCOW2v3 and RAW.
  • Volume-backed instances can be created from either QCOW2 or RAW Glance images. However, as Cinder backends are vendor-specific (Ceph, 3PAR, EMC, etc.), they may use neither QCOW2 nor RAW but their own mechanisms, like deduplication, thin provisioning or copy-on-write. Of particular note, using QCOW2 in Glance with Ceph is not supported (see the Ceph documentation and BZ#1383014).


As a general rule of thumb, rarely used images should be stored in Glance as QCOW2. For an image used constantly to create new (locally stored) instances, or for any volume-backed instances, RAW should provide better performance despite the sometimes longer initial boot time (except on Ceph-backed systems, thanks to Ceph's copy-on-write approach). In the end, any actual recommendation will depend on the OpenStack storage configuration chosen by the Cloud Administrator.

2) Performance Tweaks via Image Extra Properties

Since the Mitaka version, OpenStack allows Nova to automatically optimize certain libvirt and KVM properties on the Compute host to better execute a particular OS in the guest. To provide the guest OS information to Nova, just define the following Glance image properties:

  • os_type=linux # Generic name, like linux or windows
  • os_distro=rhel7.1 # Use osinfo-query os to list supported variants

Additionally, at least for the time being (see BZ#1397120), in order to make sure the newer and more scalable virtio-scsi para-virtualized SCSI controller is used instead of the older virtio-blk, the following properties need to be set explicitly:

  • hw_scsi_model=virtio-scsi
  • hw_disk_bus=scsi
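
For example, with the openstack CLI these properties could be set on an existing Glance image roughly as follows (the image name rhel7-guest is only a placeholder):

  • openstack image set --property os_type=linux --property os_distro=rhel7.1 rhel7-guest
  • openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi rhel7-guest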

All the supported image properties, as well as the related flavor metadata CLI options, are listed at the Red Hat Documentation Portal.

3) Prepare for Cloud-init

“Cloud-init” is a package used for early initialization of cloud instances, to configure basics like partition / filesystem size and SSH keys.

Ensure that the cloud-init and cloud-utils-growpart packages are installed in your Glance image, and that the related services will be executed on boot, so that "cloud-init" configuration is applied to the OpenStack VM.

In many cases the default configuration is acceptable, but there are lots of customization options available; for details, please refer to the cloud-init documentation.
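
As an illustration, a minimal cloud-config passed as user data at instance creation might look like the sketch below (the SSH key, file and image names are placeholders); with cloud-utils-growpart installed, the growpart module expands the root partition to the full flavor disk size:

    #cloud-config
    ssh_authorized_keys:
      - ssh-rsa AAAA... user@example.com
    growpart:
      mode: auto
      devices: ['/']

  • openstack server create --image rhel7-guest --flavor m1.small --user-data cloud-config.yaml my-instance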

4) Enable the QEMU Guest Agent

On Linux hosts, it is recommended to install and enable the QEMU guest agent, which allows graceful guest shutdown and (in the future) automatic freezing of guest filesystems when snapshots are requested, a necessary operation for consistent backups (see BZ#1385748):

  • yum install qemu-guest-agent
  • systemctl enable qemu-guest-agent

In order to provide the needed virtual devices and use the filesystem freezing functionality when needed, the following properties need to be defined for Glance images (see also BZ#1391992):

  • hw_qemu_guest_agent=yes # Create the needed device to allow the guest agent to run
  • os_require_quiesce=yes # Accept requests to freeze/thaw filesystems
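
These properties can again be set with the openstack CLI and, once an instance created from such an image is running, the agent can be verified from the Compute host (typically by the Cloud Administrator); a rough sketch, with placeholder names:

  • openstack image set --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes rhel7-guest
  • virsh qemu-agent-command <instance-id> '{"execute": "guest-ping"}' # run on the Compute host; returns {"return": {}} when the agent responds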

5) Just in case: how to recover from guest failure

Comprehensive instance fault recovery, high availability, and service monitoring require a layered approach which, as a whole, is out of scope for this document. In the paragraphs below we show the options that are applicable purely inside a guest (which can be thought of as the innermost layer). The most frequently used fault recovery mechanisms for an instance are:

  • recovery from kernel crashes
  • recovery from guest hangs (which do not necessarily involve kernel crash/panic)

In the rare case the guest kernel crashes, kexec/kdump will capture a kernel vmcore for further analysis and reboot the guest. If the vmcore is not wanted, the kernel can be instructed to reboot after a kernel crash by setting the panic kernel parameter, for example "panic=1".
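
On a RHEL 7 guest this could be done, for example, with grubby (a sketch only; adjust to your boot loader setup):

  • grubby --update-kernel=ALL --args="panic=1" # reboot one second after a kernel panic instead of hanging
  • systemctl enable kdump # alternatively, keep kdump enabled to capture a vmcore before rebooting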

In order to reboot an instance after other unexpected behavior, for example high load over a certain threshold or a complete system lockup without a kernel panic, the watchdog service can be utilized. Actions other than resetting the guest (such as poweroff or pause) are also available. The following property needs to be defined for Glance images or Nova flavors:

  • hw_watchdog_action=reset

Next, install the watchdog package inside the guest, configure the watchdog device, and finally enable the service:

  • yum install watchdog
  • vi /etc/watchdog.conf
  • systemctl enable watchdog

By default watchdog detects kernel crashes and complete system lockups. See the watchdog.conf(5) man page for more information, e.g., how to add guest health-monitoring scripts as part of watchdog functionality checks.
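
A minimal /etc/watchdog.conf could look something like the sketch below; the max-load-1 threshold is only an illustrative value and should be tuned per workload:

    # /etc/watchdog.conf
    watchdog-device = /dev/watchdog
    interval = 1
    realtime = yes
    priority = 1
    max-load-1 = 24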

6) Tune the Kernel

The simplest way to tune a Linux node is to use the “tuned” facility. It’s a service which configures dozens of system parameters according to the selected profile, which in the OpenStack case is “virtual-guest”. For NFV workloads, Red Hat provides a set of NFV tuned profiles to simplify the tuning of network-intensive VMs.

In your Glance image, it is recommended to install the required package, enable the service on boot, and activate the preferred profile. You can do it by editing the image before uploading to Glance, or as part of your cloud-init recipe:

  • yum install tuned
  • systemctl enable tuned
  • tuned-adm profile virtual-guest
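
After the next boot, the result can be verified inside the guest, for example:

  • tuned-adm active # should report: Current active profile: virtual-guest
  • tuned-adm list # list all the profiles available on the system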

7) Improve networking via VirtIO Multiqueuing

Guest kernel virtio drivers are part of the standard RHEL/Linux kernel package and are enabled automatically as needed, without any further configuration. Windows guests should also use the official virtio drivers for their particular Windows version, greatly improving network and disk IO performance.

However, recent advances in network packet processing, both in the Linux kernel and in user-space components, have created a myriad of extra options to tune or bypass the virtio drivers. Below you'll find an illustration of the virtio device model (from the RHEL Virtualization Guide).

Network multiqueuing, or virtio-net multi-queue, is an approach that enables parallel packet processing to scale linearly with the number of available vCPUs of a guest, often providing notable improvement to transfer speeds especially with vhost-user.

Provided that the OpenStack Administrator has provisioned the virtualization hosts with the supporting components installed (at least OVS 2.5 / DPDK 2.2), this functionality can be enabled by the OpenStack Tenant with the following property in those Glance images where network multiqueuing is wanted:

  • hw_vif_multiqueue_enabled=true

Inside a guest instantiated from such an image, the NIC channel setup can be checked and changed as needed with the commands below:

  • ethtool -l eth0 # to see the current number of queues
  • ethtool -L eth0 combined <nr-of-queues> # to set the number of queues; should match the number of vCPUs
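
For example, to match the number of queues to the number of vCPUs automatically, something like the following could be run at boot time (e.g., from a cloud-init runcmd or a small systemd unit); eth0 is only a placeholder interface name:

  • ethtool -L eth0 combined $(nproc) # one queue per available vCPU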

There is an open RFE to implement multi-queue activation by default in the kernel, see BZ#1396578.

[Figure: the virtio device model in QEMU/KVM, from the RHEL Virtualization Guide]

8) Other Miscellaneous Tuning for Guests

It should go without saying that right-sized instances should contain only the minimum number of installed packages and run only the services needed. Of particular note, it is probably a good idea to install and enable the irqbalance service: although not absolutely necessary in all scenarios, its overhead is minimal and it should be used, for example, in SR-IOV setups (this way the same image can be used regardless of such lower-level details).

Even though implicitly set on KVM, it is a good idea to explicitly add the kernel parameter no_timer_check to prevent issues with timing devices. Enabling persistent DHCP client and disabling zeroconf route in network configuration with PERSISTENT_DHCLIENT=yes and NOZEROCONF=yes, respectively, helps to avoid networking corner case issues.
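
As a sketch, these settings could be applied inside a RHEL 7 guest (or baked into the image) as follows; the interface name eth0 is only an example:

  • grubby --update-kernel=ALL --args="no_timer_check"
  • echo "NOZEROCONF=yes" >> /etc/sysconfig/network
  • echo "PERSISTENT_DHCLIENT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0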

Guest MTU settings are usually adjusted correctly by default, but having a proper MTU in use on all levels of the stack is crucial to achieve maximum network performance. In environments with 10G (and faster) NICs this typically means the use of Jumbo Frames with MTU up to 9000, taking possible VXLAN encapsulation into account. For further MTU discussion, see the upstream guidelines for MTU or the Red Hat OpenStack Networking Guide.
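
For instance, the MTU in use inside a guest can be checked and, where the underlying network supports it, raised with the ip tool (8950 here is only an example; VXLAN encapsulation adds roughly 50 bytes of overhead, so a 9000-byte physical MTU leaves about 8950 bytes for the tenant network):

  • ip link show eth0 # check the current MTU
  • ip link set dev eth0 mtu 8950 # must match what the underlying network actually provides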

9) Improving the way you access your instances

Although some purists may consider running SSH inside truly cloud-native instances to be incompatible with that paradigm, especially in auto-scaling production workloads, most of us will still rely on good old SSH to perform configuration tasks (via Ansible, for instance) as well as maintenance and troubleshooting (e.g., to fetch logs after a software failure).

The SSH daemon should avoid DNS lookups to speed up establishing SSH connections. For this, consider using UseDNS no in /etc/ssh/sshd_config and adding OPTIONS=-u0 to /etc/sysconfig/sshd (see sshd_config(5) for details on these). Setting GSSAPIAuthentication no could be considered if Kerberos is not in use. In case instances frequently connect to each other, the ControlPersist / ControlMaster options might be considered as well.
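
As a rough sketch, the server-side settings mentioned above and a client-side connection sharing configuration could look like this:

    # /etc/ssh/sshd_config
    UseDNS no
    GSSAPIAuthentication no

    # ~/.ssh/config on the connecting instance
    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m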

Typically, remote SSH access and console access via Horizon are enough for most use cases. During the development phase, direct console access from the Nova compute host may also be helpful. For this to work, inside the guest:

  • systemctl enable serial-getty@ttyS1.service
  • vi /etc/securetty # add ttyS1 to allow root access via the serial console, if needed

Then access the guest console from the Nova compute host with virsh console <instance-id> --devname serial1.


We hope this blog post has helped you discover new ways to improve the performance of your OpenStack instances. If you need more information, remember we have tons of documents in our OpenStack Documentation Portal and that we offer the best OpenStack courses in the industry, starting with the free-of-charge CL010 Introduction to OpenStack course.