
Re: [Linux-cluster] How do you HA your storage?



You could probably do what you want using SAN-level mirroring across
two SANs plus device-mapper-multipath. I believe the arrays will
automatically put the alternate copies into read/write mode if they
cannot communicate with the primary, if configured to do so, but I
don't have access to that capability on my EVA for lack of a license.
Actually, it wouldn't require a whole second SAN, but if I were
mirroring things, that's what I'd opt for. This is the kind of problem
DMP was designed to handle. If you are booting from the SAN, you may
need some other tweaks, but in general I think DMP is still the way to go.
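
For the multipath side, a minimal /etc/multipath.conf along these lines
is the usual starting point. This is a sketch with assumed values; the
right path_grouping_policy and no_path_retry depend on the array:

```
defaults {
    user_friendly_names yes
    # fail over between paths rather than spreading I/O across them
    path_grouping_policy failover
    # return to the preferred path as soon as it comes back
    failback             immediate
    # retry a failed path this many polling intervals before failing I/O
    no_path_retry        12
}
```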

Good luck

-C


On Sat, Apr 30, 2011 at 6:00 PM, urgrue <urgrue bulbous org> wrote:
> On 30/4/11 14:27, Corey Kovacs wrote:
>
> This has nothing to do with any network. It's all over the fiber...
>
> True, my bad, I was thinking of DRBD.
>
>> Points in time? It's a raid 1, it's relatively instant. It's more
>> complex to manage a failover in the way you describe if anything.
>
> I didn't mean that. What I meant is with any enterprise storage filer I can
> walk in and take a point in time snapshot of my entire datacenter - all
> hundreds of servers - with almost no effort. And restore it. That's a pretty
> fantastic thing to be able to do before, say, a major upgrade on hundreds of
> servers. And you manage all of it in one place. Take a situation where
> the company decides it needs a third copy of the data. It'd be a fun job
> to map and configure the third LUN on 500 servers, when on the SAN it'd
> take a few minutes to configure. Or if that third copy needs to be async
> instead, I don't think you can even do that with LVM or software RAID.
> Host-based mirroring is great for many situations, but when it comes to
> larger environments, I think most companies tend to prefer SAN mirroring.
>
>> Well, my $0.02 anyway.
>>
>> -C
>>
>> On Sat, Apr 30, 2011 at 11:03 AM, urgrue<urgrue bulbous org>  wrote:
>>>
>>> Yes, these work, but then I'm having each server handle the job of
>>> mirroring its own disks, which has some disadvantages: network usage
>>> instead of fiber, more complex management of points-in-time compared
>>> to a nice big fat centralized SAN, etc. In my experience most
>>> companies favor SAN-level replication.
>>> The challenge is just getting Linux to recover gracefully when the
>>> SAN fails over. Worst case you can just reboot, but that's not very HA.
>>>
>>>
>>> On 30/4/11 13:23, Corey Kovacs wrote:
>>>>
>>>> What you seem to be describing is the mirror target for device mapper.
>>>>
>>>> Another alternative would be to set up a software RAID using
>>>> multipathed LUNs.
>>>>
>>>> SANVOL1       SANVOL2
>>>>    |             |
>>>> MPATH1        MPATH2
>>>>      \         /
>>>>      RAID 1 DEV
>>>>           |
>>>>          PV
>>>>           |
>>>>          VG
>>>>           |
>>>>          LV
>>>>
>>>> That might work
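
The stack in that diagram could be brought up roughly like this. A
sketch only: the device names (/dev/mapper/mpatha, /dev/mapper/mpathb),
the array name and the VG/LV names are all assumptions, and the
commands need real multipathed LUNs behind them to do anything useful.

```shell
# Sketch of the RAID-1-over-multipath stack from the diagram above.
# Wrapped in a function so nothing runs until explicitly invoked.
build_md_over_multipath() {
    # md RAID 1 across one multipathed LUN from each SAN
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/mapper/mpatha /dev/mapper/mpathb

    # LVM on top of the mirror: PV -> VG -> LV, as in the diagram
    pvcreate /dev/md0
    vgcreate vg_san /dev/md0
    lvcreate -n lv_data -l 100%FREE vg_san
}
```

With this layout md handles the mirroring, so either SAN's copy can
keep the array running if the other disappears.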
>>>>
>>>> -C
>>>>
>>>>
>>>> On Sat, Apr 30, 2011 at 10:08 AM, urgrue<urgrue bulbous org>    wrote:
>>>>>
>>>>> But how do you get dm-multipath to consider two different LUNs to
>>>>> be in fact two paths to the same device?
>>>>> I mean, normally multipath has two paths to one device.
>>>>> When we're talking about SAN-level mirroring, we've got two paths
>>>>> to two different devices (which just happen to contain identical data).
>>>>>
>>>>> On 30/4/11 11:47, Kit Gerrits wrote:
>>>>>>
>>>>>> With dual-controller arrays, dm-multipath keeps checking whether
>>>>>> the current device is still responding and switches to a different
>>>>>> path if it is not (for example, by reading sector 0).
>>>>>>
>>>>>> With SAN failover, you may need to tell the secondary SAN LUN to
>>>>>> go into read-write mode.
>>>>>> Unfortunately, I am not familiar with tying this into RHEL.
>>>>>> (Also, sector 0 will already be readable on the secondary LUN, but
>>>>>> not writable.)
>>>>>>
>>>>>> Maybe there is a write test which tries to write to both SANs;
>>>>>> the one that allows write access becomes the active LUN.
>>>>>>
>>>>>> If you can switch your SANs inside 30 seconds, you might even be able
>>>>>> to
>>>>>> salvage/execute pending write operations.
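
A write test of that sort could be sketched in shell like this. The
/dev/mapper names are placeholders, not real configuration, and a
production probe would want O_DIRECT (dd's oflag=direct) so the page
cache doesn't mask a read-only path:

```shell
# Hedged sketch of the write test described above: given two candidate
# devices (mirrored LUNs on two SANs), pick the one that accepts writes.
probe_writable() {
    dev="$1"
    # read sector 0 to confirm the path is alive at all
    dd if="$dev" of=/dev/null bs=512 count=1 2>/dev/null || return 1
    # then rewrite sector 0 with its own contents; a passive
    # (read-only) mirror copy should refuse this
    dd if="$dev" of="$dev" bs=512 count=1 conv=notrunc 2>/dev/null || return 1
    echo "$dev"
}

# first writable candidate wins
active=$(probe_writable /dev/mapper/mpatha || probe_writable /dev/mapper/mpathb)
```

Whether rewriting sector 0 in place is safe enough depends on the
array; some setups would reserve a scratch sector for the probe instead.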
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Kit
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: linux-cluster-bounces redhat com
>>>>>> [mailto:linux-cluster-bounces redhat com] On Behalf Of urgrue
>>>>>> Sent: Saturday, 30 April 2011 11:01
>>>>>> To: linux-cluster redhat com
>>>>>> Subject: [Linux-cluster] How do you HA your storage?
>>>>>>
>>>>>> I'm struggling to find the best way to deal with SAN failover.
>>>>>> By this I mean the common scenario where you have SAN-based mirroring.
>>>>>> It's pretty easy with host-based mirroring (md, DRBD, LVM, etc.),
>>>>>> but how can you minimize the impact and manual effort to recover
>>>>>> from losing a LUN, and needing to somehow get your system to realize
>>>>>> the data is now on a different LUN (the now-active mirror)?
>>>>>> --
>>>>>> Linux-cluster mailing list
>>>>>> Linux-cluster redhat com
>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>>>>>
>>>>>
>>>
>

