Help! Raid-5 with 4 drives, one bad and one wants to be a spare
Gilboa Davara
gilboada at netvision.net.il
Fri Sep 23 00:06:32 UTC 2005
How many drives were actually members of the raid?
3 + 1 or 2 + 1 + 1?
Please post the raid configuration.
Gilboa
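[Editor's note: the configuration Gilboa is asking for can be read from each
member's on-disk superblock. A sketch, using the device names from Ed's
transcript below (run as root; this only reads metadata, it changes nothing):]

```shell
# Dump each member's RAID superblock: shows Raid Devices, the device's
# role number, and whether it is recorded as active, failed, or spare.
mdadm --examine /dev/hda4
mdadm --examine /dev/hdb4
mdadm --examine /dev/hdc1

# The kernel's current view of all md arrays.
cat /proc/mdstat
```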
On Thu, 2005-09-22 at 12:46 -0500, Ed K. wrote:
> I had a computer running FC1 for a very long while. All 4 IDE drives are
> part of a RAID-5 array.
>
> Then a drive (#4) crapped out, and took down the other drive (#3) on the
> same IDE bus... I would like to turn the RAID back on, but when I do, the
> RAID subsystem thinks it's a spare.
>
> Q: How can I turn the raid back on in degraded mode without the #3 drive
> being used as a spare?
>
> ...waiting for some pointers so I can sleep tonight; any help would be
> most appreciated.
>
> ed
>
> p.s.:
> I've booted the system in knoppix v3.9 now...
>
> here are the commands:
>
> root@1[~]# mdadm --assemble /dev/md2 -R /dev/hda4 /dev/hdb4 /dev/hdc1
> mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
> root@1[~]# mdadm -D /dev/md2
> /dev/md2:
> Version : 00.90.01
> Creation Time : Mon Feb 23 21:13:37 2004
> Raid Level : raid5
> Device Size : 117185984 (111.76 GiB 120.00 GB)
> Raid Devices : 4
> Total Devices : 3
> Preferred Minor : 2
> Persistence : Superblock is persistent
>
> Update Time : Thu Sep 22 14:07:32 2005
> State : active, degraded
> Active Devices : 2
> Working Devices : 3
> Failed Devices : 0
> Spare Devices : 1
>
> Layout : left-asymmetric
> Chunk Size : 64K
>
>     Number   Major   Minor   RaidDevice   State
>        0       3       4         0        active sync
>        1       3      68         1        active sync
>        2       0       0         -        removed
>        3       0       0         -        removed
>
>        4      22       1         -        spare
> root@1[~]# mdadm -S /dev/md2
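[Editor's note: the thread does not record an answer, but the usual next step
in this situation is a forced assembly. The `--force` flag tells mdadm to
update the event counters of out-of-date superblocks so a degraded set is
accepted; this is a sketch of that approach, not a confirmed fix for this
array, and the device order is taken from Ed's transcript:]

```shell
# Stop the half-assembled array first, then force-assemble from the
# three members that still hold data. --run starts it even though it
# is degraded (3 of 4 devices).
mdadm -S /dev/md2
mdadm --assemble --force --run /dev/md2 /dev/hda4 /dev/hdb4 /dev/hdc1
```

If the #3 drive's superblock already records it as a spare, `--force` may not
restore its data role; the more drastic fallback sometimes used is recreating
the array with the exact original device order and `--assume-clean`, but that
overwrites the superblocks and should only be attempted after saving the
`mdadm --examine` output from every member.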
More information about the fedora-list
mailing list