
[dm-devel] Re: raid failure and LVM volume group availability



Neil Brown <neilb suse de> writes:

> On Tuesday May 26, goswin-v-b web de wrote:
>> hank peng <pengxihan gmail com> writes:
>> 
>> > Only one of the disks in this RAID1 failed; it should continue to
>> > work in a degraded state.
>> > Why did LVM complain with I/O errors?
>> 
>> That is because the last remaining drive in a raid1 cannot be failed:
>> 
>> md9 : active raid1 ram1[1] ram0[2](F)
>>       65472 blocks [2/1] [_U]
>> 
>> # mdadm --fail /dev/md9 /dev/ram1
>> mdadm: set /dev/ram1 faulty in /dev/md9
>> 
>> md9 : active raid1 ram1[1] ram0[2](F)
>>       65472 blocks [2/1] [_U]
>> 
>> See, still marked working.
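>> 
>> For anyone scripting around this, the missing (F) flag is detectable by
>> parsing /proc/mdstat. A minimal sketch (the device names and status line
>> are taken from the example above, not from a live array):
>> 
>> ```python
>> # Sketch: check whether a member device is actually marked faulty
>> # on an md status line from /proc/mdstat.
>> import re
>> 
>> def faulty_devices(mdstat_line):
>>     """Return the set of device names flagged (F) on an md status line."""
>>     # Member tokens look like "ram0[2](F)" when faulty, "ram1[1]" otherwise.
>>     return {m.group(1) for m in re.finditer(r"(\w+)\[\d+\]\(F\)", mdstat_line)}
>> 
>> line = "md9 : active raid1 ram1[1] ram0[2](F)"
>> print(faulty_devices(line))            # only ram0 carries the (F) flag
>> print("ram1" in faulty_devices(line))  # False: the last drive was not failed
>> ```
>> 
>> After a supposedly successful "mdadm --fail", a check like this would show
>> that the last drive never picked up the (F) flag.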
>> 
>> MfG
>>         Goswin
>> 
>> PS: Why doesn't mdadm or kernel give a message about not failing?
>
> -ENOPATCH :-)
>
> You would want to rate limit any such message from the kernel, but it
> might make sense to have it.
>
> NeilBrown

There is no rate-limiting risk in having mdadm --fail itself report that it failed to fail the device.

MfG
        Goswin

