
Re: [linux-lvm] General Q on adding new PVs to existing striped RAID set



On 12/07/2010 04:15 AM, hansbkk gmail com wrote:
> Example - if I have say 6TB of RAID10 managed by LVM, and then find I
> need to add another 2TB, I might add a RAID1 mirror of 2x 2TB drives
> and extend my VG to get more space. However it seems for optimum
> performance and fault tolerance I should consider this a somewhat
> temporary workaround. Ultimately I would want to get back to a
> coherent single 8TB RAID set, and take the RAID1 mirror out.
>
> This seems to be better from multiple POVs - more space efficient,
> more consistently predictable performance (if not actually faster),
> and most importantly more fault tolerant.
Raid1 is just as fault tolerant as raid10.  Raid10 just adds striping
for performance.  You are probably thinking that with more devices, the
probability of two devices failing close together is higher.  That is
true, but going from 4 to 6 (or 8) devices does not reduce reliability
by anywhere near as much as the mirroring in raid1 increases it (with
or without striping).
> I realize if I was using RAID6 as many have advised I should, I could
> grow the underlying RAID itself directly and then expand the VG, and
> obviously this would be ideal. But I wanted to clarify regarding those
> RAID flavors that can't be grown in place, as a principle of best
> practice.
You can grow raid1/raid10 also.  Raid6 (and raid5) suffer from the
read/modify/write cycle needed to update parity, which hurts random
write performance.
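As a sketch of growing a raid1 PV in place: swap in larger disks one at
a time, then grow the array and the PV.  Device names here (/dev/md0,
/dev/sdX1) are examples only -- substitute your own, and have backups
before reshaping anything.

```shell
# Fail and remove one (small) member, add its larger replacement:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md0 --add /dev/sdc1          # larger partition
cat /proc/mdstat                        # wait for resync to finish

# Repeat for the other member (e.g. sdb1 -> sdd1), then grow the
# array to use the new capacity:
mdadm --grow /dev/md0 --size=max

# Tell LVM the PV got bigger:
pvresize /dev/md0
vgs                                     # VG now shows extra free extents
```

The key point is that nothing above LVM has to be touched: once
pvresize runs, the new extents are available to lvextend.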
> Please confirm if my thinking on this is on the right track.
LVM supports raid0 and raid1 (with other levels in testing).  The only
reason I still put raid1 "underneath" is that LVM's raid support is
still maturing (compared to, say, AIX).  LVM raid0 is quite good,
however.  Consider making each of your PVs a raid1 mirror and doing the
striping in LVM.  That is equivalent to raid10, and it gives you much
more flexibility to grow the VG than raid10 underneath does.
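A rough sketch of that layout, with example names only (md0/md1, vg0,
lv_data -- pick your own devices and sizes):

```shell
# Two md raid1 mirrors become the PVs:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# Stripe the LV across both mirrors (the raid10 equivalent),
# -i = number of stripes, -I = stripe size in KiB:
lvcreate -L 100G -i 2 -I 64 -n lv_data vg0

# Growing later is just another mirror added to the VG:
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
# vgextend vg0 /dev/md2
```

The flexibility comes from that last step: the VG grows by whole
mirrors, without reshaping an existing raid10 array.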

