[Linux-cluster] Cluster vs Distributed? & MySQL Cluster?

Johannes russek johannes.russek at io-consulting.net
Fri Oct 27 13:50:50 UTC 2006


I'm sorry to jump into the middle of this, but did I get it right that
active/active MySQL actually works?
regards, johannes
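
For context, "active/active" here means two or more mysqld daemons
opening the same MyISAM data directory on a shared GFS filesystem. A
minimal sketch of the my.cnf settings generally cited for that setup
(per the MySQL manual's external-locking caveats; the datadir path is
hypothetical and this is untested, so treat it as an assumption rather
than a recipe):

# On every node that shares the GFS data directory. This applies to
# MyISAM tables only; InnoDB cannot share its files between servers.
cat >> /etc/my.cnf <<'EOF'
[mysqld]
datadir          = /gfs/mysql
external-locking
delay_key_write  = OFF
query_cache_size = 0
EOF
service mysqld restart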

> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com]On Behalf Of David Brieck Jr.
> Sent: Thursday, October 26, 2006 4:11 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] Cluster vs Distributed? & MySQL Cluster?
>
>
> On 10/25/06, Michael Will <mwill at penguincomputing.com> wrote:
> > Are the actual data files shared in this setup between the active mysql
> > daemons?
> >
> > Last time I looked into this, it seemed that with a shared-nothing
> > model each mysql daemon would have to keep its own copy of the data,
> > and updates would be propagated either from active to passive daemons
> > (the master-slave model; a sketch follows the quoted message below)
> > or between active daemons (the NDB in-RAM database model).
> >
> > Are the mysql daemons running on the GFS I/O nodes that have access
> > to shared storage via SAN or iSCSI and coordinate locking through the
> > GFS infrastructure, or are the mysql daemons running on client nodes
> > that use GFS to remotely access storage provided by other GFS I/O
> > nodes, which in turn have access to shared storage via SAN or iSCSI?
> >
> > Michael
> >
>
> We're using GNBD for the nodes to connect to the storage. It isn't the
> fastest storage right now, but I'm hopeful that if everything works
> well we'll be able to purchase something faster.
>
> As far as MySQL running on GFS (excluding anything active/active) with
> DLM handling the locks goes, here are some comparisons (the exact
> invocation is sketched after the quoted message):
>
> Benchmark on GFS
>
> Benchmark DBD suite: 2.15
> Date of test:        2006-10-26  9:49:43
> Running tests on:    Linux 2.6.9-42.0.2.ELhugemem i686
> Arguments:           --small-test --tcpip --fast --fast-insert
> --lock-tables
> Comments:
> Limits from:
> Server version:      MySQL 4.1.20/
> Optimization:        None
> Hardware:
>
> alter-table: Total time: 94 wallclock secs ( 0.02 usr  0.01 sys +  0.00 cusr  0.00 csys =  0.03 CPU)
> big-tables:  Total time:  4 wallclock secs ( 0.13 usr  0.14 sys +  0.00 cusr  0.00 csys =  0.27 CPU)
> connect:     Total time:  5 wallclock secs ( 0.38 usr  0.53 sys +  0.00 cusr  0.00 csys =  0.91 CPU)
> create:      Total time:  8 wallclock secs ( 0.02 usr  0.01 sys +  0.00 cusr  0.00 csys =  0.03 CPU)
> insert:      Total time: 17 wallclock secs ( 2.19 usr  1.99 sys +  0.00 cusr  0.00 csys =  4.18 CPU)
> select:      Total time: 13 wallclock secs ( 2.36 usr  1.03 sys +  0.00 cusr  0.00 csys =  3.39 CPU)
>
> Benchmark on Local
>
> alter-table: Total time: 70 wallclock secs ( 0.02 usr  0.00 sys +  0.00 cusr  0.00 csys =  0.02 CPU)
> big-tables:  Total time:  2 wallclock secs ( 0.11 usr  0.14 sys +  0.00 cusr  0.00 csys =  0.25 CPU)
> connect:     Total time:  4 wallclock secs ( 0.37 usr  0.55 sys +  0.00 cusr  0.00 csys =  0.92 CPU)
> create:     Total time:  1 wallclock secs ( 0.01 usr  0.00 sys +  0.00 cusr  0.00 csys =  0.01 CPU)
> insert:      Total time: 13 wallclock secs ( 2.27 usr  1.95 sys +  0.00 cusr  0.00 csys =  4.22 CPU)
> select:      Total time: 12 wallclock secs ( 2.21 usr  0.97 sys +  0.00 cusr  0.00 csys =  3.18 CPU)
>
> It's pretty darn close and I'm willing to take a small performance hit.
>
> Here's some relevant info: the local storage is RAID5, while the GFS
> storage is RAID10 shared out using CLVM, multipath, and GNBD (that
> stack is sketched after the quote). So the local test would probably
> have been even faster on RAID1 or RAID10 instead of RAID5.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
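
For reference on the shared-nothing, master-slave model Michael
describes above, a minimal sketch of a classic MySQL 4.1 replication
pair (the server ids, host name, and credentials are all hypothetical):

# Master my.cnf: a unique server id plus the binary log.
cat >> /etc/my.cnf <<'EOF'
[mysqld]
server-id = 1
log-bin   = mysql-bin
EOF

# Master: create an account the slave may replicate through.
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'
  IDENTIFIED BY 'replpass';"

# Slave my.cnf: only needs its own distinct server id.
cat >> /etc/my.cnf <<'EOF'
[mysqld]
server-id = 2
EOF

# Slave: point it at the master and start replicating.
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
  MASTER_USER='repl', MASTER_PASSWORD='replpass'; START SLAVE;"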
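
The GNBD/CLVM stack David describes takes roughly the following shape.
This is a sketch only: the device, export name, and server name are
hypothetical, and the CLVM and multipath layers he mentions would sit
between the import and the mount.

# On the storage server: export a block device over GNBD.
gnbd_export -d /dev/sdb1 -e mysql_data

# On each MySQL node: import the device, then mount the GFS volume.
modprobe gnbd
gnbd_import -i storage1
mount -t gfs /dev/gnbd/mysql_data /gfs/mysql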
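
Finally, the numbers above come from the sql-bench suite bundled with
the MySQL distribution; rerunning the same test should look roughly
like this (the flags are the ones printed in the benchmark header
above; the connection details are assumptions):

# Run MySQL's DBD benchmark suite with the same flags as above.
cd sql-bench
perl run-all-tests --small-test --tcpip --fast --fast-insert \
  --lock-tables --server=mysql --user=root --password=secret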



