
[linux-lvm] Performance improved during pvmove??

Trying to sort out an odd one.

Have a RHEL 5.6 VM running on top of ESXi 4.1, backed by an NFS datastore.

Had a VG comprising two PVs, one of which was not properly 4k-aligned.
The system showed a lot of iowait, ESXi's performance stats showed high
IO latency, and there were noticeable pauses and glitches when using
the system.
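For anyone checking for the same problem, here's a minimal sketch of an alignment check (assumes 512-byte logical sectors; the sysfs path and example sector number are illustrative, not taken from my actual system):

```shell
# Sketch: check whether a partition start is 4 KiB aligned.
# Assumes 512-byte logical sectors; read the real start sector from
# sysfs, e.g.:  cat /sys/block/sdb/sdb1/start
start_sector=63    # old DOS-style default: 63*512 = 32256 bytes, NOT 4k aligned
if [ $(( start_sector * 512 % 4096 )) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned"
fi
# Also worth checking where LVM puts the first extent on the PV:
#   pvs -o +pe_start /dev/sdb1
```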

I decided to use pvmove to rectify this situation (ref below) by
growing the disk holding the correctly aligned PV, adding a second PV
there, and pvmoving the data off the "bad" PV onto the new one.

        Before:
            PV1 /dev/sdb1 (not aligned correctly)
            PV2 /dev/sdc1

        After:
            PV2 /dev/sdc1
            PV3 /dev/sdc2
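For reference, the sequence was roughly the following (a dry-run sketch: "myvg" is a placeholder for the real VG name, and the wrapper only prints each command rather than executing it):

```shell
# Dry-run sketch of the migration described above. Device names match
# the layout in this mail; "myvg" is a placeholder VG name.
run() { echo "+ $*"; }        # print instead of execute; drop for real use

run pvcreate /dev/sdc2        # new PV on the grown, correctly aligned disk
run vgextend myvg /dev/sdc2   # add it to the VG
run pvmove -v /dev/sdb1       # migrate extents off the misaligned PV
run vgreduce myvg /dev/sdb1   # remove the old PV from the VG
run pvremove /dev/sdb1        # wipe its LVM label before detaching the disk
```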

As soon as I kicked off the pvmove (pvmove -v /dev/sdb1), my iowait
dropped to normal levels, and ESXi's write-latency graphs dropped to
about as low as I've ever seen them.

Interaction with the system became "normal", with no glitchiness.

Normal file system activity continued (we didn't take the system down
for this workload).

After about 8 hours the pvmove finished and I removed /dev/sdb from the
VG and from the system.  Almost immediately the IO wait times spiked
again, ESXi once again showed spikes of 500+ ms latency on IO
requests, and the system became glitchy again from the console.

I'd say it's not as bad as before, but what gives?  Why were things
great *during* the pvmove, but not after?

I'm wondering if I goofed by using two PVs on the same disk with this
type of setup.  A single logical I/O request might need to be serviced
by multiple requests to completely different spots on the same
physical disk....

I may look to create a brand new physical disk and just migrate all of
the data there.

Anyone have any thoughts?

