[dm-devel] Re: StorageWorks failover model (was: tools target for SLES9 SP2 and RHEL4 U2)

christophe varoqui christophe.varoqui at free.fr
Sun Jun 12 16:37:49 UTC 2005


On dim, 2005-06-12 at 16:33 +0200, Axel Thimm wrote:
> On Sun, Jun 12, 2005 at 02:31:26PM +0200, christophe varoqui wrote:
> > On dim, 2005-06-12 at 14:21 +0200, Axel Thimm wrote:
> > > Hi,
> > > 
> > > On Thu, Jun 09, 2005 at 09:34:53PM +0200, christophe varoqui wrote:
> > > > On jeu, 2005-06-09 at 20:15 +0100, Alasdair G Kergon wrote:
> > > > > On Thu, Jun 09, 2005 at 08:16:42PM +0200, christophe varoqui wrote:
> > > > > > Should we stabilize a 0.4.5 out of the git head
> > > > be aware I broke the StorageWorks failover model to satisfy the
> > > > expressed need to proactively fail paths in the DM when the checkers see
> > > > them going down.
> > > 
> > > What does that mean for StorageWorks users? I'm currently setting up a
> > > StorageWorks EVA3000 from scratch based on FC4 final. Will I stumble
> > > into any pitfalls, or would that only affect gits users?
> > > 
> > 0.4.4 should be ok. I don't know what FC packagers did though.
> 
> It's 0.4.4.2 with a couple of patches. Is that still OK?
> 
I guess yes.
You'll know soon enough in your testing ...

What you have to be careful about is whether the daemon reinstates
paths in the inactive PG. If it does so, you should be safe.

> > Also be aware you'll be best served using the failover policy for now :
> > there is a 20% performance impact with multi-path per PG.
> 
> The default multipath.conf ships with path_grouping_policy multibus
> (e.g. all 4 paths in one path group on a 2x active/2x failing
> controller setup). I understand that doing round robin over the active
> and failed paths will make performance drop.

> But what about path grouping with group_by_serial (like tur did,
> e.g. an active path group and a failing path group)? Is that eating
> performance, too? So I should prefer a path per PG (failover)?
> 
Yes, there is a performance cost with any topology that puts multiple
paths in one PG. That is true for both multibus and group_by_serial in
your setup.

multibus as a default is an error, which is corrected in the devel
branch. group_by_serial would be the adequate policy, setting aside
this performance problem, which will eventually get sorted out.
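In the meantime, forcing the failover policy looks roughly like this in
multipath.conf (a sketch only; keyword names have varied between
releases, so check the annotated example config shipped with your
package):

    defaults {
        # one path per priority group: I/O goes down a single path,
        # the remaining paths are kept as standby for failover
        path_grouping_policy    failover
    }

A per-device override in a devices { device { ... } } section works the
same way if you only want to change the behaviour for the EVA3000
rather than globally.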

Regards,
-- 
christophe varoqui <christophe.varoqui at free.fr>
