
Re: [linux-lvm] clvmd leaving kernel dlm uncontrolled lockspace



On 06.06.13 13:06, matthew patton wrote:
--- On Thu, 6/6/13, Andreas Pflug <pgadmin pse-consulting de> wrote:

On a machine acting as Xen host with 20+ running VMs, I'd clearly
prefer to clean up that orphaned memory space and go on.... I
This is exactly why it is STRONGLY suggested that you split your storage tier from your compute tier. The lowest-friction method would be a pair of hosts that hold the disks (or access a common disk set) and export them as NFS. The compute nodes can speed things up with CacheFS for their locally running VMs, assuming you shepherd the live-migration process.

The Xen hosts are iSCSI initiators, but their usage of the SAN-located VG has to be coordinated using clvmd. It's just what XCP/XenServer does, but with clvmd to ensure locking (apparently XCP/XenServer relies on friendly behaviour and uses no locking).
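
For reference, a minimal sketch of that kind of setup (the VG name san-vg is made up, and the exact lvm.conf layout depends on the distribution): clustered locking is switched on in lvm.conf, the shared VG is flagged as clustered, and the kernel DLM lockspaces that clvmd joins can be inspected with dlm_tool. If clvmd dies and leaves its lockspace behind (the situation in the subject), dlm_tool ls should still show it.

    # /etc/lvm/lvm.conf on every Xen host (clvmd era, before lvmlockd)
    locking_type = 3        # clustered locking via clvmd

    # mark the SAN-backed VG as clustered so clvmd coordinates access to it
    vgchange -c y san-vg

    # list the kernel DLM lockspaces joined on this node; the lockspace
    # clvmd creates is normally named "clvmd", and one that outlives a
    # crashed clvmd is the "uncontrolled lockspace" case from the subject
    dlm_tool ls

    # attempt to leave the stale lockspace (may not succeed if it is
    # genuinely uncontrolled, i.e. no longer managed by dlm_controld)
    dlm_tool leave clvmd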

If the VMs all want a shared filesystem for a running app and the app can't be written to work safely with NFS (why not?), then you can run corosync and friends + GFS2 at that level.

The VMs have their own private devices, each an LV on a SAN VG.
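
To illustrate (the VG name san-vg, LV name and size are invented), each guest gets its own LV carved out of the shared, clustered VG and activated exclusively on the host that runs it:

    # create a private disk for one VM on the shared VG
    lvcreate -L 20G -n vm01-disk0 san-vg

    # activate it exclusively on the host that will run the VM; with
    # locking_type = 3 the exclusive lock is taken cluster-wide via clvmd/DLM
    lvchange -aey san-vg/vm01-disk0

    # ...and deactivate it there before the VM is started elsewhere
    lvchange -an san-vg/vm01-disk0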

Regards
Andreas

