
[linux-lvm] Changing dev_t-devname mapping in lvmlib seems to be problematic



Hello Zdenek,
I am testing the following scenario:

I have 5 dm-linear devices, which I set up manually using dmsetup.
Their tables point at local disks, like this:
/dev/mapper/alex0 => /dev/sda
/dev/mapper/alex1 => /dev/sdb

I create a VG on top of these 5 dm devices, using pvcreate and then
lvmlib APIs. The VG has no LVs at this point.
Later I tear down all the dm devices (using dmsetup remove).
Then I recreate the 5 dm devices, giving them the same names and
setting up the same linear tables.

The difference is that I create them in a different order.
So, for example, previously /dev/mapper/alex0 pointed at /dev/sda and
its real devnode was /dev/dm-0 (251:0); now /dev/mapper/alex0 still
points at /dev/sda, but its devnode is /dev/dm-1 (251:1).
The names that I feed to LVM are always /dev/mapper/alex0,
/dev/mapper/alex1 ...
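To illustrate, here is a toy Python model (not dm code) of what happens to the name-to-dev_t mapping. It assumes minors are handed out in creation order, which matches what I observe when all the devices have been removed first:

```python
# Toy model: dm hands out the lowest free minor, so when the table starts
# empty, minors follow creation order and recreating the same names in a
# different order remaps name -> dev_t.
def create_devices(names):
    """Assign (major, minor) pairs in creation order, major 251 as in my logs."""
    return {name: (251, minor) for minor, name in enumerate(names)}

before = create_devices(["alex0", "alex1"])   # alex0 -> 251:0, alex1 -> 251:1
after  = create_devices(["alex1", "alex0"])   # alex0 -> 251:1, alex1 -> 251:0

assert before["alex0"] == (251, 0)
assert after["alex0"] == (251, 1)
```

Same names, same tables, different dev_t - that is the whole trigger.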

In my application (at present) the lvm_t handle is not closed until
the application exits.

The issue that I see is in the _cache.devices handling: it maps dev_t
to 'struct device *' objects. So when searching for 251:1, a stale
entry for 251:1 is found (the former /dev/mapper/alex1). It still
contains the old list of aliases in dev->aliases, and the new name is
added to it, so it now holds both /dev/mapper/alex0 and
/dev/mapper/alex1 (and other names as well)...

In addition, this entry has dev->pvid of the /dev/mapper/alex1 PV.
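To make the stale-entry behaviour concrete, here is a toy Python model of what I believe _cache.devices is doing (the Device class and lookup_or_add helper are my simplification, not LVM code):

```python
# Toy model of _cache.devices as described above: keyed by dev_t, each
# entry carries a list of path aliases and a cached pvid.
class Device:
    def __init__(self, devt, name, pvid):
        self.devt, self.aliases, self.pvid = devt, [name], pvid

cache = {}

def lookup_or_add(devt, name):
    dev = cache.get(devt)
    if dev is None:
        cache[devt] = dev = Device(devt, name, pvid=None)
    elif name not in dev.aliases:
        dev.aliases.append(name)     # a stale entry keeps its old aliases
    return dev

# First scan: alex1 was 251:1 and carried some PV label.
d = lookup_or_add((251, 1), "/dev/mapper/alex1")
d.pvid = "PVID-of-alex1"

# After teardown + reordered recreation, alex0 is now 251:1.
d = lookup_or_add((251, 1), "/dev/mapper/alex0")
print(d.aliases)   # ['/dev/mapper/alex1', '/dev/mapper/alex0']
print(d.pvid)      # still 'PVID-of-alex1' -- stale
```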

Further I see a call to lvmcache_add with the following parameters:
pvid = correct pvid of /dev/mapper/alex0 (pointing at /dev/sda)
dev->pvid = the pvid of /dev/mapper/alex1

As a result, the _pvid_hash gets messed up: it basically ends up with
4 PVs instead of 5. I am attaching a text file which traces the
contents of _pvid_hash during the lvmcache_add call (I added some
prints there).
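Here is a toy Python reconstruction of the sequence I traced (heavily simplified; the real lvmcache_add does much more, and the short labels stand in for the real pvids):

```python
# Toy reconstruction of the _pvid_hash corruption: lvmcache_add() is
# called with the correct pvid, but the dev's cached dev->pvid still
# belongs to a *different* PV.
def lvmcache_add(pvid_hash, pvid, dev):
    existing = pvid_hash[pvid]
    existing["dev"] = dev                # existing->dev = dev
    # _lvmcache_update_pvid: remove the entry keyed by the dev's old
    # pvid -- but that key belongs to another PV, whose entry is lost.
    pvid_hash.pop(dev["pvid"], None)
    dev["pvid"] = pvid
    # Re-insert under the new key; an entry with that key already exists.
    pvid_hash[pvid] = existing

pvid_hash = {p: {"pvid": p, "dev": {"pvid": p}}
             for p in ["PV-P", "PV-C", "PV-A", "PV-Q", "PV-R"]}

stale_dev = {"pvid": "PV-C"}             # e.g. dev 251:5 with a stale pvid
lvmcache_add(pvid_hash, "PV-A", stale_dev)

print(len(pvid_hash))                    # 4 -- PV-C's entry is gone
```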

So it looks like an open lvm_t handle cannot survive such a change in
the dev_t mapping.

I have a couple of questions:
- Is my analysis (at least more or less) correct?
- Is it generally a bad idea to use dm-linear devices as PVs? (I
guarantee that the dm-linear tables are always set up correctly.)
- Will using a fresh lvm_t handle for each LVM operation solve this
issue? The command-line tools, when invoked, seem to work fine (and
they build a fresh cache each time). I will guarantee that nobody else
touches the relevant dm devices during the LVM operation.
- I also see that LVM scans devices like /dev/disk/by-id..., while in
lvm.conf I set the filter to accept only /dev/mapper/alex (all others
are rejected). What am I missing?
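For reference, the filter in my lvm.conf is roughly of this shape (the exact regexes may differ, but this is the intent):

```
# devices section of lvm.conf: accept only the alex* dm devices,
# reject everything else.
devices {
    filter = [ "a|^/dev/mapper/alex|", "r|.*|" ]
}
```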

Thanks for your help,
  Alex.
lvmcache_add
Oct 18 18:01:18 vc-0-0-7-01--nightly--118 prog: &&&&&&&&&& Before: pvid=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, dev=251:5, dev->pvid=Cx4gK1NwW3IUGU7Dj1Tw4TXH0G65fw2Q

_pvid_hash before:
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x722138: key=PYzixfmLTDdpJrCR67kVZTBC7USJY0wy, pvid=PYzixfmLTDdpJrCR67kVZTBC7USJY0wy, 251:9
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x70b2a8: key=Cx4gK1NwW3IUGU7Dj1Tw4TXH0G65fw2Q, pvid=Cx4gK1NwW3IUGU7Dj1Tw4TXH0G65fw2Q, 251:5
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x721d98: key=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, pvid=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, 251:3
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x70b9c8: key=QFx9B2pcEHEzgbGImcTPpXedROsx0t9z, pvid=QFx9B2pcEHEzgbGImcTPpXedROsx0t9z, 251:1
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x75aeb8: key=amf0lOCoaJyoXCkfsCaBkWDJhgircufZ, pvid=amf0lOCoaJyoXCkfsCaBkWDJhgircufZ, 251:7

'existing' pointer:
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x721d98: key=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, pvid=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, 251:3

Error message:
Oct 18 18:09:18 nightly--118 prog: Found duplicate PV 5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1: using /dev/mapper/vpart-52481 not /dev/disk/by-id/dm-name-vpart-52481

existing->dev = dev
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x721d98: key=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, dev->pvid=Cx4gK1NwW3IUGU7Dj1Tw4TXH0G65fw2Q, 251:5

_lvmcache_update_pvid:
remove entry with key Cx4gK1NwW3IUGU7Dj1Tw4TXH0G65fw2Q
_pvid_hash:
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x722138: key=PYzixfmLTDdpJrCR67kVZTBC7USJY0wy, pvid=PYzixfmLTDdpJrCR67kVZTBC7USJY0wy, 251:9
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x721d98: key=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, pvid=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, 251:3
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x70b9c8: key=QFx9B2pcEHEzgbGImcTPpXedROsx0t9z, pvid=QFx9B2pcEHEzgbGImcTPpXedROsx0t9z, 251:1
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x75aeb8: key=amf0lOCoaJyoXCkfsCaBkWDJhgircufZ, pvid=amf0lOCoaJyoXCkfsCaBkWDJhgircufZ, 251:7

'info' pointer gets dev->pvid updated:
Oct 18 18:01:19 nightly--118 prog: &&&&&&& lvmcache_info 0x721d98: key=5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1, dev->pvid="5x2JQxKi6GG3Mtw0ZrX2YPR9RCaYU0Z1", 251:5

trying to insert 'info', but another entry with the same key already exists...

from here on we have basically lost one PV in _pvid_hash



