[libvirt] [PATCH 2/2] docs: add page about virtlockd setup

Daniel P. Berrange berrange at redhat.com
Mon Feb 16 11:59:37 UTC 2015


Introduce some basic docs describing the virtlockd setup.

Signed-off-by: Daniel P. Berrange <berrange at redhat.com>
---
 docs/locking-lockd.html.in | 160 +++++++++++++++++++++++++++++++++++++++++++++
 docs/locking.html.in       |   2 +-
 docs/sitemap.html.in       |   4 ++
 3 files changed, 165 insertions(+), 1 deletion(-)
 create mode 100644 docs/locking-lockd.html.in

diff --git a/docs/locking-lockd.html.in b/docs/locking-lockd.html.in
new file mode 100644
index 0000000..2b01ee5
--- /dev/null
+++ b/docs/locking-lockd.html.in
@@ -0,0 +1,160 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+  <body>
+    <h1>Virtual machine lock manager, virtlockd plugin</h1>
+
+    <ul id="toc"></ul>
+
+    <p>
+      This page describes use of the <code>virtlockd</code>
+      service as a <a href="locking.html">lock driver</a>
+      plugin for virtual machine disk mutual exclusion.
+    </p>
+
+    <h2><a name="background">virtlockd background</a></h2>
+
+    <p>
+      The virtlockd daemon is a single purpose binary which
+      focuses exclusively on the task of acquiring and holding
+      locks on behalf of running virtual machines. It is
+      designed to offer a low overhead, portable locking
+      scheme that can be used out of the box on virtualization
+      hosts with minimal configuration overhead. It makes
+      use of the POSIX fcntl advisory locking capability
+      to hold locks, which is supported by the majority of
+      commonly used filesystems.
+    </p>
+
+    <h2><a name="daemonsetup">virtlockd daemon setup</a></h2>
+
+    <p>
+      On most operating systems, the virtlockd daemon itself will
+      not require any upfront configuration work. It is installed
+      by default when libvirtd is present, and a systemd socket
+      unit is registered such that the daemon will be automatically
+      started when first required. On operating systems that predate
+      systemd, however, it will be necessary to start it at boot
+      time, prior to libvirtd being started. On RHEL/Fedora distros,
+      this can be achieved as follows:
+    </p>
+
+    <pre>
+      # chkconfig virtlockd on
+      # service virtlockd start
+    </pre>
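On hosts that do use systemd, the socket unit can be enabled and inspected explicitly. This is a sketch; the unit names below are assumed from typical libvirt packaging:

```shell
# Enable socket activation so virtlockd starts on first connection
systemctl enable virtlockd.socket
systemctl start virtlockd.socket

# Confirm the socket is listening and whether the daemon has run yet
systemctl status virtlockd.socket virtlockd.service
```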
+
+    <p>
+      The above instructions apply to the privileged instance of
+      virtlockd, which is used by the privileged libvirtd daemon.
+      If libvirtd is run as an unprivileged user, it will always
+      automatically spawn an unprivileged instance of the virtlockd
+      daemon too. This requires no setup at all.
+    </p>
+
+    <h2><a name="lockdplugin">libvirt lockd plugin configuration</a></h2>
+
+    <p>
+      Once the virtlockd daemon is running, or setup to autostart,
+      the next step is to configure the libvirt lockd plugin.
+      There is a separate configuration file for each libvirt
+      driver that is using virtlockd. For QEMU, we will edit
+      <code>/etc/libvirt/qemu-lockd.conf</code>.
+    </p>
+
+    <p>
+      The default behaviour of the lockd plugin is to acquire locks
+      directly on the virtual disk images associated with the guest
+      &lt;disk&gt; elements. This ensures it can run out of the box
+      with no configuration, providing locking for disk images on
+      shared filesystems such as NFS. It does not provide any cross
+      host protection for storage that is backed by block devices,
+      since locks acquired on device nodes in /dev only apply within
+      the host. It may also be the case that the filesystem holding
+      the disk images is not capable of supporting fcntl locks.
+    </p>
+
+    <p>
+      To address these problems it is possible to tell lockd to
+      acquire locks on an indirect file. Essentially lockd will
+      calculate the SHA256 checksum of the fully qualified path,
+      and create a zero length file in a given directory whose
+      filename is the checksum. It will then acquire a lock on
+      that file. Assuming the block devices assigned to the guest
+      are using stable paths (e.g. /dev/disk/by-path/XXXXXXX) then
+      this will allow for locks to apply across hosts. This
+      feature can be enabled by setting a configuration setting
+      that specifies the directory in which to create the lock
+      files. The directory referred to should of course be
+      placed on a shared filesystem (e.g. NFS) that is accessible
+      to all hosts which can see the shared block devices.
+    </p>
+
+    <pre>
+      $ su - root
+      # augtool -s set \
+        /files/etc/libvirt/qemu-lockd.conf/file_lockspace_dir \
+        "/var/lib/libvirt/lockd/files"
+    </pre>
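The name of an indirect lock file can also be derived by hand, which is handy when auditing the lockspace directory. This sketch assumes the plugin hashes the fully qualified path string as-is; the device path shown is purely hypothetical:

```shell
# Hypothetical stable device path assigned to the guest
path=/dev/disk/by-path/pci-0000:00:1f.2-ata-1

# The lock file created in file_lockspace_dir is a zero length file
# whose name is the SHA256 checksum of the fully qualified path
printf '%s' "$path" | sha256sum | awk '{print $1}'
```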
+
+    <p>
+      If the guests are using either LVM or SCSI block devices
+      for their virtual disks, there is a unique identifier
+      associated with each device. It is possible to tell lockd
+      to use this UUID as the basis for acquiring locks, rather
+      than the SHA256 sum of the filename. The benefit of this
+      is that the locking protection will work even if the file
+      paths to the given block device are different on each
+      host.
+    </p>
+
+    <pre>
+      $ su - root
+      # augtool -s set \
+        /files/etc/libvirt/qemu-lockd.conf/scsi_lockspace_dir \
+        "/var/lib/libvirt/lockd/scsi"
+      # augtool -s set \
+        /files/etc/libvirt/qemu-lockd.conf/lvm_lockspace_dir \
+        "/var/lib/libvirt/lockd/lvm"
+    </pre>
+
+    <p>
+      It is important to remember that the changes made to the
+      <code>/etc/libvirt/qemu-lockd.conf</code> file must be
+      propagated to all hosts before any virtual machines are
+      launched on them. This ensures that all hosts are using
+      the same locking mechanism.
+    </p>
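One way to propagate the file is a simple loop over the hosts; the host names here are examples only, not part of any libvirt tooling:

```shell
# Copy the lockd plugin configuration to every host in the cluster
# before any guests are started (host names are hypothetical)
for host in host1 host2 host3; do
    scp /etc/libvirt/qemu-lockd.conf "root@$host:/etc/libvirt/"
done
```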
+
+    <h2><a name="qemuconfig">QEMU/KVM driver configuration</a></h2>
+
+    <p>
+      The QEMU driver has been able to use the virtlockd plugin
+      since release <span>1.0.2</span>.
+      The out of the box configuration, however, currently
+      uses the <strong>nop</strong> lock manager plugin.
+      To get protection for disks, it is thus necessary
+      to reconfigure QEMU to activate the <strong>lockd</strong>
+      driver. This is achieved by editing the QEMU driver
+      configuration file (<code>/etc/libvirt/qemu.conf</code>)
+      and changing the <code>lock_manager</code> configuration
+      tunable.
+    </p>
+
+    <pre>
+      $ su - root
+      # augtool -s set /files/etc/libvirt/qemu.conf/lock_manager lockd
+      # service libvirtd restart
+    </pre>
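After restarting libvirtd, the active setting can be read back with augtool to confirm the edit took effect; it prints the configuration path together with its current value:

```shell
# Query the lock_manager tunable via the Augeas tree for qemu.conf
augtool get /files/etc/libvirt/qemu.conf/lock_manager
```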
+
+    <p>
+      Every time you start a guest, the virtlockd daemon will acquire
+      locks on the disk files directly, or in one of the configured
+      lookaside directories based on the SHA256 sum. To check that locks
+      are being acquired as expected, the <code>lslocks</code> tool
+      can be run.
+    </p>
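For example, on a host with a running guest, filtering the lslocks output for virtlockd will show the POSIX locks it currently holds (the exact columns vary by util-linux version):

```shell
# List advisory locks held by the virtlockd process, if any
lslocks | grep -w virtlockd || echo "no locks currently held by virtlockd"
```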
+
+  </body>
+</html>
diff --git a/docs/locking.html.in b/docs/locking.html.in
index cf72eb0..c25ae85 100644
--- a/docs/locking.html.in
+++ b/docs/locking.html.in
@@ -28,7 +28,7 @@
         nothing. This can be used if mutual exclusion between
         virtual machines is not required, or if it is being
         solved at another level in the management stack.</dd>
-      <dt>lockd</dt>
+      <dt><a href="locking-lockd.html">lockd</a></dt>
       <dd>This is the current preferred implementation shipped
         with libvirt. It uses the <code>virtlockd</code> daemon
         to manage locks using the POSIX fcntl() advisory locking
diff --git a/docs/sitemap.html.in b/docs/sitemap.html.in
index 5b911ce..a9fa945 100644
--- a/docs/sitemap.html.in
+++ b/docs/sitemap.html.in
@@ -103,6 +103,10 @@
                 <span>Ensuring exclusive guest access to disks</span>
                 <ul>
                   <li>
+                    <a href="locking-lockd.html">virtlockd</a>
+                    <span>virtlockd lock manager plugin</span>
+                  </li>
+                  <li>
                     <a href="locking-sanlock.html">Sanlock</a>
                     <span>Sanlock lock manager plugin</span>
                   </li>
-- 
2.1.0



