call for testing, dmraid in rawhide

Laurent Jacquot jk at lutty.net
Tue Dec 13 20:56:52 UTC 2005


Hello,
On Tuesday, December 13, 2005 at 11:48 -0500, Peter Jones wrote:
> On Tue, 2005-12-13 at 06:23 -0500, Build System wrote:
> 
> > anaconda-10.90.18-1
> ...
> > * Sun Dec 11 2005 Peter Jones <pjones@redhat.com> - 10.90.17-1
> > - Full dmraid support.  (still disabled by default)
> 
> As the changelog says, last night's rawhide build has support for dmraid
> during installation.  If anybody wants to test this, I'd be really
> appreciative ;)
> 
> A couple of ground rules/caveats:
> 
> 1) Right now on a default install, /boot doesn't get mounted after
>    install.  In general, "mount -a" doesn't work just yet, and
>    "fsck -a" probably has similar issues.  Changing fstab to point at
>    the device instead of a label will probably fix it (I haven't
>    tried that yet ;)
> 2) It's expecting a partition table on the raid, not a raid on a
>    partition.  AFAIK this is how all BIOSes actually lay out the
>    metadata, so that should be normal.
> 3) RAID 0, 1, and (in some cases with some BIOSes) RAID 1+0 only.  No
>    RAID 5 or RAID 6 yet, even if your BIOS does it.
> 4) You'll probably get a nasty failure if you're doing RAID 1 and
>    your drives aren't synced already.  (Heinz, we probably should
>    discuss this some.)
> 5) If you move disks that have RAID metadata onto a controller/BIOS
>    that doesn't support it, the installer is still going to think
>    they're perfectly good, and it'll install grub on them, etc.
>    Don't do that.  It won't work.
> 6) If you've added support already and you do an upgrade, it almost
>    certainly won't work.  I've got no intention of making this work,
>    either.  Sorry.
> 7) Bug reports should go to bugzilla.redhat.com.  File them against
>    anaconda; if they need to be assigned somewhere else, we'll
>    reassign it.
> 
> So, without further fanfare:
> 
> To enable this, add "dmraid" to the installer boot command line.
> 
> -- 
>   Peter
> 
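Regarding point 1 above, here is a minimal sketch of the fstab
workaround, with a made-up array name (the real /dev/mapper name and
the partition suffix, "1" vs "p1", depend on your BIOS's metadata
format and on how the partition nodes get created):

# /etc/fstab line a default install writes for /boot:
LABEL=/boot                /boot  ext3  defaults  1 2
# pointed at the device-mapper node instead:
/dev/mapper/nvidia_xxxxp1  /boot  ext3  defaults  1 2

And "add dmraid to the installer boot command line" means typing
something like "linux dmraid" at the installer's boot: prompt.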
My box here is installed on two SATA disks:
[root@jack ~]# dmraid -r
/dev/sda: nvidia, "nvidia_egeafiab", mirror, ok, 390721966 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_egeafiab", mirror, ok, 390721966 sectors, data@ 0

[root@jack ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1020032 blocks [2/2] [UU]

md3 : active raid1 sdb5[1] sda5[0]
      152360320 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      40957632 blocks [2/2] [UU]

unused devices: <none>

This is software RAID without BIOS help, and I'm using LVM on those
RAID devices.

[root@jack ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md1   rootvg lvm2 a-    39,03G 13,19G
  /dev/md3   datavg lvm2 a-   145,28G     0

and md0 is an ext3 /boot.
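For reference, dmraid looks for vendor signatures on the whole disks,
while md keeps its superblock on the member partitions, so the two can
show up side by side on the same drives.  A quick sketch for checking
which metadata lives where (both are standard commands, nothing here
is specific to rawhide):

# md superblock on a member partition:
mdadm --examine /dev/sda1
# vendor/BIOS-format metadata, as dmraid sees it:
dmraid -r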

What do you mean in "2)"? I have partition tables on the RAID (the LVM
stuff) _and_ RAID on partitions...

Is this supported, or do I fall into the 6) category? I was planning a
reinstall but cannot afford losing datavg.
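In case it helps, a rough sketch of carrying datavg across a reinstall
with standard LVM2 commands (the backup path is just an example, and
the installer must of course be told to leave md3 alone during
partitioning):

# keep a copy of the VG metadata somewhere off the box:
vgcfgbackup -f /tmp/datavg.vg datavg
# deactivate it cleanly before rebooting into the installer:
vgchange -an datavg
# after the reinstall, rescan and reactivate:
vgscan
vgchange -ay datavg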

TIA

        Laurent




