LVM1 to LVM2 plans for FC2

Paul Jakma paul at dishone.st
Tue Feb 10 01:47:23 UTC 2004


On Tue, 9 Feb 2004, Alexandre Oliva wrote:

> Well...  See, creating/renaming/removing one LV is still easier
> than doing it on 3-5 LVs.  No matter how decent such management is.

Not if you must boot to a rescue CD to do it. Besides, the shell 
takes care of repetition for me :).
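
(A minimal sketch of the sort of repetition I mean, in Python purely
for illustration; a shell for-loop does the same. The VG name, LV
names and sizes are made up:)

    #!/usr/bin/env python
    # Create several LVs in one go rather than by hand.
    # VG name, LV names and sizes below are hypothetical examples.
    import subprocess

    vg = "vg0"
    lvs = {"usr": "4G", "var": "2G", "home": "8G"}

    for name, size in lvs.items():
        # Equivalent to: lvcreate -L <size> -n <name> <vg>
        subprocess.check_call(["lvcreate", "-L", size, "-n", name, vg])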

> But see, you don't need the space to be contiguous, that's one of
> the beauties of LVM.  You can do live pvmove and optimize the
> system for whatever use you like (see LVreorg in my home page).  

Right. Whether you have one or many LVs makes no difference though.

> Well, for one, /var/spool/mail is a soft link to a directory in a
> huge filesystem for me. 

Surely that is what mount and /etc/fstab are for? :)

(Though mine is a symlink too, due to autofs.)

> I'm talking about fragmenting data across multiple partitions.  If
> you install a package onto a single filesystems, odds are that all
> of the data files, libraries and binaries will be close to each
> other in the disk.  But if you have say libs in one partition,
> binaries in another, config files in another and data files in yet
> another, there's going to be a lot of disk heads movement that you
> could have saved by using a single filesystem.

For a set of filesystems on a single disk or RAID1 system, possibly
yes.  However, for many-spindles + RAID + LVM, you've already ceded
some control over which blocks are used.

Especially if your system has been in use for quite a while and you
have resized, created and deleted LVs, your free physical extents
themselves will be fragmented, which subverts some of the clustering
optimisations the fs tries to do. Fragmentation within the FS itself
will, of course, also have reduced locality.

So, unless you size capacity such that you are guaranteed to _always_
have more than plenty, fragmentation _will_ come into play. If you do
intend to actually use the capacity you have, instead just optimise
your disk setup for the general case and try to improve the worst
case, even at the expense of the best case (as many spindles as
possible, appropriately RAIDed).

I.e., I'd much rather have a storage system whose performance was a
steadyish line than a bell curve (e.g. many-spindle RAID versus a
single disk for seek times), even if the curve's best case was
significantly higher than the steady line's best case (though not too
much, obviously). Because then fragmentation is simply not something
to get worried about, and I have better things to worry about. :)

> Because filesystems tend to do it for you, to a point.  But by
> breaking filesystems up into small pieces, you stop it from helping
> you.

Depends.

> Well, we all know how slow RAID 5 is.

Read is OK. Write sucks. But it was the best compromise between space
and efficiency; that has perhaps changed given today's humungous
drive sizes. Seek time beats RAID1, though.
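
(To put rough numbers on that space/efficiency trade-off, a quick
back-of-the-envelope sketch; the disk count and size are just
examples, not a recommendation:)

    #!/usr/bin/env python
    # Rough RAID capacity and small-write cost arithmetic.
    disks, size_gb = 6, 9

    raid1_usable = (disks // 2) * size_gb   # mirrored pairs
    raid5_usable = (disks - 1) * size_gb    # one disk's worth of parity

    # A small random write costs RAID1 two I/Os (both mirrors); RAID5's
    # read-modify-write costs four (read data, read parity, write data,
    # write parity), hence the write penalty.
    print("RAID1: %d GB usable, 2 I/Os per small write" % raid1_usable)
    print("RAID5: %d GB usable, 4 I/Os per small write" % raid5_usable)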

HPA's RAID6 code seems interesting. I've not seen performance numbers 
for it though.

> I've recently moved most of my data to RAID 1 + LVM, and
> performance has improved significantly. I only keep multi-media,
> seldom modified data in RAID 5, just to save a few hundred GiBs I'd
> waste with RAID 1.

Well, I have 36GB of space across six 9GB spindles, trying to eke out
every GB because SCSI disks are just so damn expensive per GB. I'll
move to SATA and software RAID, I think. RAID1 across 2 SATA spindles
would probably provide 200GB usable storage as well as better
performance! (How times change; RIP SCSI.)

> All that said, there's a lot of room for personal preferences and for
> different install/upgrade strategies.  

Absolutely. :)

> I'm probably jumping out of this thread for now :-) 

And same, it has meandered slightly :)

> Thanks for your insights.

And yours.

regards,
-- 
Paul Jakma	paul at clubi.ie	paul at jakma.org	Key ID: 64A2FF6A
	warning: do not ever send email to spam at dishone.st
Fortune:
It's better to burn out than it is to rust.




