
Re: [libvirt] 'build' on FS pool now unconditionally formats?

On 02/26/2010 06:28 PM, Dave Allan wrote:
On 02/26/2010 06:23 AM, Daniel P. Berrange wrote:
On Thu, Feb 25, 2010 at 10:01:09PM -0500, Dave Allan wrote:
On 02/25/2010 08:25 AM, Daniel P. Berrange wrote:
On Thu, Feb 25, 2010 at 01:19:54PM +0100, Daniel Veillard wrote:
On Wed, Feb 24, 2010 at 03:51:45PM -0500, Cole Robinson wrote:
Hi guys,

Looking at the new FS pool build options and talking with Dave, I see
calling PoolBuild on an FS pool now unconditionally calls mkfs. This is
bad when mixed with virt-manager: previously, we assumed the FS build
was always non-destructive (at most it created a directory), so we
called it every time, and didn't even allow users to opt out, since
there wasn't a case that called for it.

This new formatting behavior really needs to be opt in, otherwise all
virt-manager versions creating an FS pool can destroy data.

Just FYI, for disk pools (and certain LVM configurations) where this
has always been destructive, we default to build=off, and loudly warn
the user if they choose otherwise. We can do that with this new option
as well, but the previous behavior really needs to be reinstated IMO
(and before the new release).

I fully accept that this could be a bug in virt-manager's assumptions
about the build command, but consider even a virsh user: previously
build just created a directory; now it formats a partition, without
any XML change.

I was initially reluctant to change the behaviour, and asked to use a
flag to keep the original default semantics. I got convinced that no one
could rely on it because the function was basically incomplete. But
since virt-manager ships with an expectation of the previous behaviour,
I revert my position: we need to add a _FORMAT = 4 flag for this call and
only call mkfs if that flag is passed. The fix is trivial; we should not
push 0.7.7 without it.

I agree, we should not push 0.7.7 with the code the way it is now, but
what I'm describing below is also trivial, so it wouldn't push back the
release.
I really don't want to add an extra flag, because it makes the filesystem
pool a special case. The 'build' operation is intentionally destructive
by its very definition, and virt-manager should never be expecting it to
be safe to call on specific pool types.

The problem is that what build means has never been defined, and while
it may have been the intention to implement only destructive operations,
the backends implement a variety of actions. For some backends build is
easy to reverse; for others it is not. The only guidance virsh help gives
is "build a pool", which doesn't indicate any danger at all. I would
define build as "make the changes to the media necessary to start the
pool" and split those changes into destructive and non-destructive
actions with a flag. (see below)

The distinction of destructive vs non-destructive does not make sense
at an API level. The build operation is intended to do whatever data
formatting changes are required in order to construct the pool. Whether
this is destructive or not is really an artifact of whether there is any
data already on the storage that you care about.

This is why I think the build() operation, with flags=0, should refuse
to run if it detects that the pool already has been constructed. This
will protect against data loss with virt-manager's current usage, and
also avoid changing the semantics of the current LVM/disk backends.

Adding an OVERWRITE flag will then allow apps to explicitly ignore the
checks for existing data, allowing possibly destructive operation.

IMHO, we should do two things to address this

 - Fix virt-manager to not call build all the time for any pool
   type - it must only do it when explicitly requested

 - Make the 'build' operation check to see if the pool is
   already constructed (eg LVM magic check for logical pools,
   FAT partition check for disk pools & filesystem magic check
   for the fs pool). Reject the build operation if any of these
   show that the pool exists / is already built
 - Add an 'OVERWRITE' flag, to allow apps to forcibly reformat,
   regardless of current state

I propose we add a DESTRUCTIVE flag and require it for destructive
operations on all the backends. The downside, obviously, is that it
changes the behavior of the disk and LVM backends that currently don't
require a flag for destructive operations. I'm not too worried about
that behavioral change, though, because what's in the tree right now
changes the behavior of the fs backend, making a previously
non-destructive operation into a destructive operation without any
change on the user's part and without warning.

I don't want us to be changing the semantics of the existing LVM/disk
backends here. In terms of our release schedule, I think we should just
revert the current change to the FS backend completely. Then make the
release. Then re-apply the change, with some followup patches to add
the checks for existing formatted data in the next release.

Ok, I reverted the original patch as we're all in agreement on that.

The utility of the check is dependent on what's on the disk. If the user
specifies the wrong partition, which is not hard to do (for example,
choosing a database partition instead of the intended filesystem in the
virt-manager gui), the partition will still be overwritten without
warning, whereas other overwrite operations are, as Cole says, loudly
warned about. The only way to prevent that is to require the flag before
any fs format. It's somewhat gross, I freely admit, but I feel fairly
strongly that we need to err on the side of caution when dealing with
users' data.
I'm ok with leaving the other backends the way they are.


Now that 0.7.7 is done, we need to revisit fs pool building.

Because it is impossible to implement a check for arbitrary existing data, the only safe option is to require the overwrite flag in all cases. If we do not require the flag in all cases, virt-manager and virsh will format unknown partitions without providing any indication to the user that a destructive operation is about to take place. The only input from the user will be the selection of the partition. All other instances of destructive behavior require explicit confirmation from the user, or as Cole aptly put it, are loudly warned about by virt-manager. I wish that a safe alternative existed, but none does.

The attached patch implements this behavior.

From cf05596823881448725cd67094f39c136144683b Mon Sep 17 00:00:00 2001
From: David Allan <dallan redhat com>
Date: Mon, 15 Mar 2010 14:04:41 -0400
Subject: [PATCH 1/1] Add fs pool formatting

* This patch reinstates pool formatting and adds a flag to enable overwriting of data.
 configure.ac                     |    5 +++++
 include/libvirt/libvirt.h.in     |    5 +++--
 libvirt.spec.in                  |    2 ++
 src/storage/storage_backend_fs.c |   34 +++++++++++++++++++++++++++++++++-
 tools/virsh.c                    |    8 +++++++-
 5 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/configure.ac b/configure.ac
index 654b9a8..3869f45 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1300,12 +1300,15 @@ AM_CONDITIONAL([WITH_STORAGE_DIR], [test "$with_storage_dir" = "yes"])
 if test "$with_storage_fs" = "yes" -o "$with_storage_fs" = "check"; then
   AC_PATH_PROG([MOUNT], [mount], [], [$PATH:/sbin:/usr/sbin])
   AC_PATH_PROG([UMOUNT], [umount], [], [$PATH:/sbin:/usr/sbin])
+  AC_PATH_PROG([MKFS], [mkfs], [], [$PATH:/sbin:/usr/sbin])
   if test "$with_storage_fs" = "yes" ; then
     if test -z "$MOUNT" ; then AC_MSG_ERROR([We need mount for FS storage driver]) ; fi
     if test -z "$UMOUNT" ; then AC_MSG_ERROR([We need umount for FS storage driver]) ; fi
+    if test -z "$MKFS" ; then AC_MSG_ERROR([We need mkfs for FS storage driver]) ; fi
     if test -z "$MOUNT" ; then with_storage_fs=no ; fi
     if test -z "$UMOUNT" ; then with_storage_fs=no ; fi
+    if test -z "$MKFS" ; then with_storage_fs=no ; fi

     if test "$with_storage_fs" = "check" ; then with_storage_fs=yes ; fi
@@ -1316,6 +1319,8 @@ if test "$with_storage_fs" = "yes" -o "$with_storage_fs" = "check"; then
   AC_DEFINE_UNQUOTED([MOUNT],["$MOUNT"],
         [Location or name of the mount program])
   AC_DEFINE_UNQUOTED([UMOUNT],["$UMOUNT"],
         [Location or name of the umount program])
+  AC_DEFINE_UNQUOTED([MKFS],["$MKFS"],
+        [Location or name of the mkfs program])
 AM_CONDITIONAL([WITH_STORAGE_FS], [test "$with_storage_fs" = "yes"])
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 0d1b5b5..77e6b8d 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1052,8 +1052,9 @@ typedef enum {

 typedef enum {
   VIR_STORAGE_POOL_BUILD_NEW  = 0,   /* Regular build from scratch */
-  VIR_STORAGE_POOL_BUILD_REPAIR = 1, /* Repair / reinitialize */
-  VIR_STORAGE_POOL_BUILD_RESIZE = 2  /* Extend existing pool */
+  VIR_STORAGE_POOL_BUILD_REPAIR = (1 << 0), /* Repair / reinitialize */
+  VIR_STORAGE_POOL_BUILD_RESIZE = (1 << 1),  /* Extend existing pool */
+  VIR_STORAGE_POOL_BUILD_OVERWRITE = (1 << 2)  /* Overwrite data */
 } virStoragePoolBuildFlags;

 typedef enum {
diff --git a/libvirt.spec.in b/libvirt.spec.in
index a54d546..c6e0678 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -209,6 +209,8 @@ BuildRequires: util-linux
 # For showmount in FS driver (netfs discovery)
 BuildRequires: nfs-utils
 Requires: nfs-utils
+# For mkfs
+Requires: util-linux
 # For glusterfs
 %if 0%{?fedora} >= 11
 Requires: glusterfs-client >= 2.0.1
diff --git a/src/storage/storage_backend_fs.c b/src/storage/storage_backend_fs.c
index a83fc01..fe43af2 100644
--- a/src/storage/storage_backend_fs.c
+++ b/src/storage/storage_backend_fs.c
@@ -45,6 +45,7 @@
 #include "util.h"
 #include "memory.h"
 #include "xml.h"
+#include "logging.h"


@@ -498,8 +499,9 @@ virStorageBackendFileSystemStart(virConnectPtr conn ATTRIBUTE_UNUSED,
 static int
 virStorageBackendFileSystemBuild(virConnectPtr conn ATTRIBUTE_UNUSED,
                                  virStoragePoolObjPtr pool,
-                                 unsigned int flags ATTRIBUTE_UNUSED)
+                                 unsigned int flags)
 {
+    const char *mkfsargv[5], *device = NULL, *format = NULL;
     int err, ret = -1;
     char *parent;
     char *p;
@@ -552,6 +554,36 @@ virStorageBackendFileSystemBuild(virConnectPtr conn ATTRIBUTE_UNUSED,
             goto error;
         }
     }
+
+    if (flags & VIR_STORAGE_POOL_BUILD_OVERWRITE) {
+        if (pool->def->source.devices == NULL) {
+            virStorageReportError(VIR_ERR_INVALID_ARG,
+                                  _("No source device specified when formatting pool '%s'"),
+                                  pool->def->name);
+            goto error;
+        }
+        device = pool->def->source.devices[0].path;
+        format = virStoragePoolFormatFileSystemTypeToString(pool->def->source.format);
+        VIR_DEBUG("source device: '%s' format: '%s'", device, format);
+        mkfsargv[0] = MKFS;
+        mkfsargv[1] = "-t";
+        mkfsargv[2] = format;
+        mkfsargv[3] = device;
+        mkfsargv[4] = NULL;
+        if (virRun(mkfsargv, NULL) < 0) {
+            virReportSystemError(errno,
+                                 _("Failed to make filesystem of "
+                                   "type '%s' on device '%s'"),
+                                 format, device);
+            goto error;
+        }
+    }
     ret = 0;
diff --git a/tools/virsh.c b/tools/virsh.c
index aa85ee6..4b2e3e9 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4216,6 +4216,7 @@ static const vshCmdInfo info_pool_build[] = {

 static const vshCmdOptDef opts_pool_build[] = {
     {"pool", VSH_OT_DATA, VSH_OFLAG_REQ, N_("pool name or uuid")},
+    {"overwrite", VSH_OT_BOOL, 0, gettext_noop("overwrite existing data")},
     {NULL, 0, 0, NULL}

@@ -4224,6 +4225,7 @@ cmdPoolBuild(vshControl *ctl, const vshCmd *cmd)
     virStoragePoolPtr pool;
     int ret = TRUE;
+    int flags = 0;
     char *name;

     if (!vshConnectionUsability(ctl, ctl->conn, TRUE))
@@ -4232,7 +4234,11 @@ cmdPoolBuild(vshControl *ctl, const vshCmd *cmd)
     if (!(pool = vshCommandOptPool(ctl, cmd, "pool", &name)))
         return FALSE;

-    if (virStoragePoolBuild(pool, 0) == 0) {
+    if (vshCommandOptBool (cmd, "overwrite")) {
+        flags |= VIR_STORAGE_POOL_BUILD_OVERWRITE;
+    }
+
+    if (virStoragePoolBuild(pool, flags) == 0) {
         vshPrint(ctl, _("Pool %s built\n"), name);
     } else {
         vshError(ctl, _("Failed to build pool %s"), name);
