[linux-lvm] Performance impact of LVM

Lamont R. Peterson peregrine at openbrainstem.net
Thu Jul 27 17:14:57 UTC 2006


On Thursday 27 July 2006 06:02am, Sander Smeenk wrote:
> Hello list!
>
> I've recently subscribed here as I have some questions about LVM,
> particularly the performance impact of LVM on disk IO.
>
> I'm a happy LVM user; I've used it on my workstation at home for a long
> time. No special setups or anything, but it's nice to be able to resize
> partitions on the fly, or have a number of disks act as one huge disk...
>
> So, when I had to reinstall all the servers for the company I work for, I
> decided to use LVM for the same reasons stated above. But now I wonder:
> Does LVM have any impact on disk IO? Are there any tests done on this
> subject?
>
> I couldn't really find any on the internet. Most of the things you find
> are implementation issues and 'how does it work' stuff ;-)
>
> I'm running LVM2 (2.02.06) on Debian 'sid' (unstable, but I hate that word)
> using Linux kernels 2.6.17.xx.
>
> For example, one of my servers has 4x 34 GB SCSI disks and 2x IDE disks.
> One of the IDE disks has a 250 MB boot partition and the rest is an LVM
> partition; the other IDE disk has one big LVM partition, and the same
> goes for the 4 SCSI disks.
>
> Then I made a scsi_vg01 with all the SCSI disks and an ide_vg01 with all
> the IDE disks, and started lvcreating "partitions" inside those VGs.
> That's basically how I set up LVM on all of my servers. Some servers
> have different disk configurations though...

Any particular reason to not include all the disks in a single VG?

Also, this setup actually leaves you more vulnerable to single disk 
failures: lose one PV and every LV with extents on it goes with it.  I would 
*highly* recommend using RAID to aggregate your disks together, then use LVM 
on top of that to make things manageable.
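
Just as a sketch (device names here are examples; adjust to your setup),
RAID + LVM looks something like this:

    # mirror two disks, then put LVM on top of the mirror
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    pvcreate /dev/md0
    vgcreate vg01 /dev/md0
    lvcreate -L 10G -n data vg01

That way a dead disk costs you redundancy instead of data, and you still
get all of LVM's resizing flexibility on top.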

> Can anyone shed any light on this approach? Are there impacts on
> performance of read / write actions? Any information is welcome.

When you read from or write to an LV, the device-mapper code in the kernel's 
block layer translates the logical address within the LV into a physical 
device and sector offset, and sends the I/O operation there.
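
You can actually look at the mapping table device-mapper uses.  For a
simple linear LV it is a single line (the numbers below are made up; yours
will differ):

    # dmsetup table vg01-data
    0 4194304 linear 8:16 384

That says logical sectors 0-4194303 of the LV map straight onto device
8:16 (e.g. /dev/sdb) starting at sector 384, so the per-I/O "lookup" is
just a table walk and a little arithmetic.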

The only "extra" LVM I/O done is when you are (re)configuring LVM.  Things 
like creating, resizing & deleting an LV require a little bit of disk I/O, of 
course.  Other than the small amount of overhead when using snapshot volumes, 
there isn't any other impact on I/O performance.
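
To be concrete about the snapshot case: snapshots are copy-on-write, so the
first write to any chunk of the origin has to copy the old data into the
snapshot first.  E.g. (volume names are just examples):

    lvcreate -s -L 2G -n data_snap /dev/vg01/data

While data_snap exists, the first write to each chunk of /dev/vg01/data
costs an extra read and write; later writes to the same chunk run at full
speed again.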

However, I do wonder how the LVM address look-up code compares to that for a 
plain block device (e.g. partition, loopback mounted file, etc.).  If there 
is a statistically relevant delta there, I think it would only show up as 
I/O latency, and even then it couldn't be much.
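
It would be easy enough to measure yourself.  A crude sequential-read
comparison (device/LV names are examples, and dd is a blunt instrument, so
don't read too much into small differences):

    # plain partition
    dd if=/dev/sda3 of=/dev/null bs=1M count=1024 iflag=direct
    # LV sitting on the same disk
    dd if=/dev/vg01/test of=/dev/null bs=1M count=1024 iflag=direct

I would expect the two numbers to be within the noise of each other.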

When booting your system, it does have to take a moment to "vgscan" for VGs.  
This is pretty fast, but it adds a second or two to your bootup time.
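
Roughly speaking, the init scripts just run something like this (details
vary by distro):

    vgscan            # find VG metadata on all block devices
    vgchange -ay      # activate every LV that was found

Scanning every block device is what takes the second or two; the
activation itself is quick.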

That's all I can think of off the top of my head.  HTH.
-- 
Lamont R. Peterson <peregrine at OpenBrainstem.net>
Founder [ http://blog.OpenBrainstem.net/peregrine/ ]
GPG Key fingerprint: 0E35 93C5 4249 49F0 EC7B  4DDD BE46 4732 6460 CCB5
  ___                   ____            _           _
 / _ \ _ __   ___ _ __ | __ ) _ __ __ _(_)_ __  ___| |_ ___ _ __ ___
| | | | '_ \ / _ \ '_ \|  _ \| '__/ _` | | '_ \/ __| __/ _ \ '_ ` _ \
| |_| | |_) |  __/ | | | |_) | | | (_| | | | | \__ \ ||  __/ | | | | |
 \___/| .__/ \___|_| |_|____/|_|  \__,_|_|_| |_|___/\__\___|_| |_| |_|
      |_|               Intelligent Open Source Software Engineering
                              [ http://www.OpenBrainstem.net/ ]