[K12OSN] Raid Level Choice

Les Mikesell les at futuresource.com
Wed Dec 22 22:58:32 UTC 2004


On Wed, 2004-12-22 at 16:04, Liam Marshall wrote:

> so are you saying there is basically no advantage to hardware raid?

Not quite.  If you use RAID5 it is better done in hardware, but you
need one of the high-end controllers (3ware etc.) that really does
the work in hardware instead of mostly in the drivers.  RAID1 does
not involve much computation, so there is not a big advantage to
dedicated hardware over just putting each drive on its own channel
(always the case with SATA) so that accesses run in parallel.

> I have effectively shot my wad getting the 4 drives and the raid 
> controller.  I was intending it to both increase capacity and 
> give the system some sense of disaster recovery through RAID's ability 
> to rebuild or work with alternate mirrored drives

You will accomplish that any way you approach it.  I tend to not
trust new hardware until I've tested it myself and I especially
don't trust new hardware with Linux drivers that don't have a
lot of real-world testing.  I have done software RAID1 on an
assortment of drive types and controllers and have replaced
failed drives with the 'mdadm' tool, so I know that works.  I
also have some 3ware controllers that so far have not had any
problems.  With anything else I'd either have to do some extensive
testing of how the drive failure/rebuild works or find where someone
else had documented that kind of testing.
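
For what it's worth, replacing a failed drive in an md RAID1 set is
just a few commands (the md device and partition names here are only
examples - use whatever your array and drives actually are):

  # mark the bad drive failed and remove it from the array
  mdadm --manage /dev/md0 --fail /dev/sdb1
  mdadm --manage /dev/md0 --remove /dev/sdb1
  # after physically swapping the drive, add it back and it resyncs
  mdadm --manage /dev/md0 --add /dev/sdb1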

> The setup I am currently using has less than 30 GB of hard drive space, 
> all of it on slow, old 5400 rpm SCSI drives.  The faster, newer, larger 
> SATA drives should at least equal it performance-wise.  I can 
> function now, I am just out of space and suspecting a drive of being on 
> the verge of failure.  Each new drive is 80 GB in size, so even in 
> something like raid 10 I am getting 160 GB usable space, right?  Over 5 
> times what I have now.  With raid 5 I would have even more.

You'll lose one drive's worth of space to raid5 parity (a quarter of
your four drives, a third if you only use three), compared to half
for raid1 or raid10.
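
To put numbers on the four 80 GB drives you mentioned:

  raid10 (4 drives):             4 x 80 / 2    = 160 GB usable
  raid5  (4 drives):             (4 - 1) x 80  = 240 GB usable
  raid5  (3 drives + hot spare): (3 - 1) x 80  = 160 GB usable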

> What I am looking to do is acheive a perfomance increase, however 
> slight, while insuring a better disaster recovery.

You should get that whether you run the controller in JBOD
(Just-a-Bunch-Of-Disks) mode with software raid or let it
do RAID5.  

> the controller I bought does hardware raid, so why wouldn't I use that 
> instead of software raid via lvm, which in itself has a performance hit, 
> right? 

First you have to be sure that the controller is going to do all
the work.  Some of the SATA RAID controllers really do the work in
the drivers, and since those drivers are newer and less tested than
Linux MD arrays, they are likely to be less efficient and more prone
to bugs.
Also, look at what you have to do to rebuild the raid after a drive
fails - with some you would have to shut down for a fairly long
time to do it in the controller firmware. 
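
With md arrays, by contrast, you can watch a rebuild happen while the
system stays up, e.g.:

  # shows array state and rebuild progress for all md devices
  cat /proc/mdstat
  # detailed status for one array (md0 is just an example name)
  mdadm --detail /dev/md0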

>  I just don't know for sure whether to do 5 or 10.  People tell 
> me 5 is better, disaster-recovery-wise, but with a performance hit.  10 
> is faster, but no parity stuff is happening, so less disaster recovery

That's not quite true.  Raid5 uses parity (distributed across the
drives) and computation to fill in missing data.  Raid1 keeps two
full copies.  You can only
lose one disk at a time out of a raid5 set and access to the set
is likely to be slow until it is rebuilt.  With raid1 you can lose
more than one drive as long as each one that fails still has a
working mate, and there is no speed penalty to having failed drives
in the set.

If your controller really does do the work in hardware and offers
to automatically rebuild from a hot spare, another option besides
what you've mentioned would be to put 3 drives in a raid5 set with
another reserved as a hot spare.  This would give you less space
and less performance but would take care of itself completely when
a disk fails.  The opposite approach, if you don't trust the controller
and want to keep things simple, is to make 2 RAID1 sets, mounting one
of them as /home (see the sketch below).  Unless you need more space
than a single drive holds in /home you wouldn't need the extra
complexity of raid0 striping or LVM - just put /boot and / on one
device and /home on the other.
Or are the SCSI drives going to stay in the machine too?
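
For reference, if you end up doing either layout in software, the
mdadm commands would look roughly like this (device names are just
placeholders for however your four SATA drives show up):

  # option 1: 3-drive raid5 with the 4th drive as a hot spare
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # option 2: two raid1 mirrors, one for / and /boot, one for /home
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1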

---
  Les Mikesell
   les at futuresource.com




