
Re: [Linux-cluster] clvm mirroring target status



Thank you for your clarification.
If you are getting so many questions about this setup, it's probably because people find it an appealing configuration. Maybe a good sign for Red Hat to accommodate pool and make it work that way. That would be true customer-driven development. Just hinting ;)

Thanks for your help so far,

Regards,

Filip Sergeys


On Tue, 2005-02-08 at 18:37, Benjamin Marzinski wrote:
>     easy or reliable.  The easiest way to do it is to have host A hold both
>     disk A and disk B, and host B hold disks Am and Bm. To do this, GNBD import
>     the disks from host A, assemble the pool, GNBD import the disks from host B,
>     and use pool_mp to integrate them into the pool. This should automatically
>     set you up in failover mode, with disks A and B as the primary disks and
>     disks Am and Bm as the backups. I realize that this means that host B is
>     usually sitting idle.
>     
> 
> This sounds hopeful.
> 
>     If you name your devices correctly, or import them in a specific order, you
>     might be able to get pool to use the correct devices in the setup you
>     described, but I'm not certain.
>     
> 
> "might be able..." -> Now I've lost hope: I borrowed the idea of this setup
> from the GFS admin guide, the "economy and performance" setup.
> (http://www.redhat.com/docs/manuals/csgfs/admin-guide/s1-ov-perform.html#S2-OV-ECONOMY)
> Probably I misinterpreted figure 1-3, especially the disk part.
> Can you elaborate a bit more on "if you name your devices correctly or
> import them in a specific order"?
> If I put Am and Bm on one machine, export them with gnbd and join them
> in the pool in failover mode, can I be sure there will be no writing on
> them? Because drbd won't let that happen.

If you put Am and Bm on one machine, pool should be fine.  That method
should always work. The downside is that one machine is sitting idle.
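For reference, the import-and-assemble sequence for that method might look roughly like the sketch below. This is a dry-run illustration only: gnbd_import, pool_assemble, and pool_mp are the real GFS pool utilities, but the flags and the pool name "mypool" are assumptions from memory, not verified syntax, so each step is echoed instead of executed.

```shell
# Dry-run sketch of the failover pool setup described above.
# Tool names are the real GFS pool utilities; the flags and the
# pool name "mypool" are illustrative assumptions.
run() { echo "would run: $*"; }

run gnbd_import -i hostA          # import disk A and disk B from host A
run pool_assemble mypool          # assemble the pool from the primary disks
run gnbd_import -i hostB          # import the mirrors Am and Bm from host B
run pool_mp -m failover mypool    # integrate them as failover (backup) paths
```

Check the actual man pages before running anything like this on a live cluster.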

If you fiddle around with the names and ordering of imports, you can
probably get the setup you originally described to work. The benefit of
this setup is that both machines are in use until one goes down.  However,
getting this setup to work may be trickier, and without looking at the pool
code, I don't know exactly what you need to do.

Sorry if my email wasn't clear.

And about the admin guide.... Um.... If you misinterpreted figure 1-3, then
I did too. I wrote GNBD. I know all the testing that QA has done on it, and
I have never heard of this setup being tested.  I expect that somewhere,
there is a marketing person to blame. That's not to say that it won't work. The
tricky thing is to get pool to select the correct devices as primary ones and
drbd to fail over before pool does (which happens right after the failed node
is fenced).
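That ordering constraint (fence first, then promote drbd) can be sketched as a wrapper around the fencing step. This is a hypothetical illustration, not a tested agent: fence_node and `drbdadm primary` are real commands, but the wrapper itself, its argument names, and the ssh invocation are assumptions. The command runner is injectable so the ordering can be checked without cluster hardware.

```python
import subprocess

def fence_and_promote(failed_node, surviving_node, drbd_resource,
                      run=subprocess.check_call):
    """Fence the failed GNBD server, then promote drbd on the survivor."""
    # 1. Fence first: the failed node must be gone before its mirror is
    #    promoted, or both sides could briefly be writable.
    run(["fence_node", failed_node])
    # 2. Then promote the drbd resource on the surviving server, so the
    #    mirror is writable by the time pool fails over to it (which
    #    happens right after fencing completes).
    run(["ssh", surviving_node, "drbdadm", "primary", drbd_resource])
```

In a real setup this logic would live inside the modified fencing agent rather than call fence_node directly.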

Thanks for pointing this out to me.  I was wondering why I was getting so
many questions about drbd under gnbd. And this explains it.

-Ben
 
> Thanks,
> 
> Filip Sergeys
> 
>     What your design actually wants is for pool to not do multipathing at all, but
>     to simply retry on failed IO.  That way, when the virtual IP switches, gnbd
>     will just automatically pick up the device at its new location. Unfortunately,
>     pool and gnbd cannot do this.
>     
>     -Ben
>      
>     > Consequences:
>     > -------------------
>     > Bringing host B back into the game needs manual intervention:
>     > - Basically, all services on the cluster nodes need to stop writing.
>     > - Sync the disk from Bm to B.
>     > - Give host B back its virtual IP address.
>     > - Mount B read/write.
>     > - Unmount Bm on host A.
>     > - Start all services again on the nodes.
>     > => I know this is not perfect, but we can live with that. This will need to
>     > happen after office hours. The thing is that we don't have the budget for
>     > shared storage, and certainly not for a redundant shared storage solution,
>     > because most entry-level shared storage systems are SPOFs.
>     > 
>     > I need to find out more about that multipathing. I am not sure how to use it
>     > in this configuration.
>     > If you have ideas for improvement, they are welcome.
>     > 
>     > Regards,
>     > 
>     > Filip
>     > 
>     > PS. Thanks for your answer on the clvm mirroring status.
>     > 
>     >  
>     > 
>     > 
>     > 
>     > 
>     > On Friday 04 February 2005 21:00, Benjamin Marzinski wrote:
>     > > On Fri, Feb 04, 2005 at 05:52:31PM +0100, Filip Sergeys wrote:
>     > > > Hi,
>     > > >
>     > > > We are going to install a linux cluster with 2 gnbd servers (no SPOF)
>     > > > and gfs + clvm on the cluster nodes (4 nodes). I have two options, if I
>     > > > read the docs well, for duplicating data on the gnbd servers:
>     > > > 1) using clvm target mirroring on the cluster nodes
>     > > > 2) use drbd underneath to mirror disks. Basically two disks per machine:
>     > > > one live disk which is mirrored with drbd to the second disk in the other
>     > > > machine, and the other way around on the second machine
>     > > > (so the second disk in the first machine is the mirror of the
>     > > > live disk in the second machine; it sounds complicated, but it is
>     > > > just hard to write down).
>     > > > The live disks from both machines will be combined as one logical disk
>     > > > (if I understood well, this is possible).
>     > > >
>     > > > Question: what is the status of clvm mirroring? Is it stable?
>     > > > Suppose it is stable, so I have a choice: which one of the options would
>     > > > any of you choose? Reason? (Stability, performance, ...)
>     > >
>     > > I'm still not sure if cluster mirroring is available for testing (I don't
>     > > think that it is). It's definitely not considered stable.
>     > >
>     > > I'm also sort of unsure about your drbd solution.
>     > > As far as I know, drbd only allows write access on one node at a time. So,
>     > > if the first machine uses drbd to write to a local device and one on the
>     > > second machine, the second machine cannot write to that device. drbd is
>     > > only useful for active/passive setups.  If you are using pool multipathing
>     > > to multipath between the two gnbd servers, you could set it to failover
>     > > mode, and modify the fencing agent that you are using to fence the
>     > > gnbd_server, to make it tell drbd to fail over when you fence the server.
>     > >
>     > > I have never tried this, but it seems reasonable. One issue would be how to
>     > > bring the failed server back up, since the devices are going to be out of
>     > > sync.
>     > >
>     > > http://www.drbd.org/start.html says that drbd still only allows write
>     > > access to one node at a time.
>     > >
>     > > sorry :(
>     > >
>     > > -Ben
>     > >
>     > > > I found two hits on Google concerning clvm mirroring, but both say it is
>     > > > not finished yet. However, the most recent one is from June 2004.
>     > > > I cannot test either because we have no spare machine. I'm going to buy
>     > > > two machines, so I need to know which disk configuration I will be using.
>     > > >
>     > > > Thanks in advance,
>     > > >
>     > > > Regards,
>     > > >
>     > > > Filip Sergeys
>     > > >
>     > > >
>     > > >
>     > > > http://64.233.183.104/search?q=cache:r1Icx--aI2YJ:www.spinics.net/lists/g
>     > > >fs/msg03439.html+clvm+mirroring+gfs&hl=nl&start=12
>     > > > https://www.redhat.com/archives/linux-cluster/2004-June/msg00028.html
>     > > >
>     > > > --
>     > > > *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
>     > > > * System Engineer, Verzekeringen NV *
>     > > > * www.verzekeringen.be              *
>     > > > * Oostkaai 23 B-2170 Merksem        *
>     > > > * 03/6416673 - 0477/340942          *
>     > > > *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
>     > > >
>     > > > --
>     > > > Linux-cluster mailing list
>     > > > linux-cluster@redhat.com
>     > > > http://www.redhat.com/mailman/listinfo/linux-cluster
>     > >
>     > 
>     


