[Linux-cluster] GFS on 2.6.8.1 more simple performance numbers

Wim Coekaerts wim.coekaerts at oracle.com
Sat Oct 16 04:07:46 UTC 2004


Can someone tell me what the focus of this list is, for sure?
E.g., is this GFS only, or clustering for the 2.6 kernel in
general? If the latter, we can also bring up ocfs2; or should I
shut up on the ocfs2 side?


On Fri, Oct 15, 2004 at 05:15:05PM -0700, Daniel McNeil wrote:
> I had more time to test GFS.  Reminder of the setup
> (note: I added more memory so the machines are up to 1GB).
> 3 machines each:
>         2 processors (800 MHz Pentium III)
>         1GB of memory
>         2 100Mb ethernet (1 public, 1 private)
>         1 2-port Qlogic FC host adapter
> 2 F/C switches cascaded together
> 1 10-disk dual-controller FAStT200 (36GB 10,000rpm drives)
> 
> The command run was 'time tar xf /Views/linux-2.6.8.1.tar;
> time sync', where /Views is an NFS-mounted file system and
> the current working directory is in a clean file system on
> a 5-disk stripe (64k stripe width).  For the 2-node case,
> I ran the command in separate directories on each node.
> For comparison, the ext3 file system is on a single SCSI
> disk in data=ordered mode.
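> 
> (A minimal reproduction sketch of the benchmark loop; /gfs/dir1
> is an illustrative per-node working directory, not the actual
> path used here:
> 
>     #!/bin/sh
>     # Extract the kernel tree from the NFS-hosted tarball into
>     # the node's GFS working directory, then flush dirty data
>     # to disk; both phases are timed separately.
>     cd /gfs/dir1 || exit 1
>     time tar xf /Views/linux-2.6.8.1.tar
>     time sync
> 
> For the 2-node runs the same script runs concurrently on each
> node, each node in its own directory.)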
> 
> 
> Tar
> ---		real		user		sys
> ext3 tar	0m6.535s	0m0.429s	0m4.010s
> ext3 sync	0m21.953s	0m0.000s	0m0.574s
> 	
> gfs 1 node tar 	1m15.286s 	0m0.787s	0m17.085s
> gfs 1 node sync	0m7.734s 	0m0.000s 	0m0.190s
> 
> gfs 2 node tar	3m58.337s 	0m0.844s 	0m17.082s
> gfs 2 node sync	0m3.119s 	0m0.000s 	0m0.117s
> gfs 2 node tar	3m55.147s	0m0.911s	0m17.529s
> gfs 2 node sync	0m1.862s	0m0.001s	0m0.043s
> 
> 
> du -s linux-2.6.8.1 (after 1st mount)
> -----		real		user		sys
> ext3 		0m5.361s	0m0.039s	0m0.318s
> gfs 1 node	0m46.077s	0m0.097s	0m5.144s
> gfs 2 node	0m40.835s	0m0.069s	0m3.218s
> gfs 2 node	0m41.313s	0m0.089s	0m3.348s
> 
> Doing a 2nd du -s should hit the cache.  On ext3 it always
> seems to; on gfs the numbers vary quite a bit (see the sketch
> after the table below).
> 
> 2nd du -s 
> ---------
> ext3 		0m0.130s	0m0.028s 	0m0.101s
> gfs 1 node 	0m20.95s	0m0.075s	0m3.102s
> gfs 1 node	0m0.453s 	0m0.044s 	0m0.408s
> gfs 2 node	0m0.446s	0m0.046s	0m0.400s
> gfs 2 node 	0m0.456s 	0m0.028s	0m0.428s
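> 
> (One way to check the caching behaviour, as a sketch; /gfs is
> an assumed mount point with an fstab entry:
> 
>     #!/bin/sh
>     # Remount to start from a cold cache, then time two
>     # consecutive scans; the second should be served from the
>     # inode/dentry caches if nothing invalidates them between.
>     umount /gfs && mount /gfs
>     cd /gfs/dir1 || exit 1
>     time du -s linux-2.6.8.1    # cold: locks and inodes read in
>     time du -s linux-2.6.8.1    # warm: should hit the caches
> )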
> 
> rm -rf linux-2.6.8.1
> --------------------
> ext3		0m5.050s 	0m0.019s 	0m0.822s
> gfs 1 node	0m28.583s 	0m0.094s 	0m8.354s
> gfs 2 node	7m16.295s 	0m0.073s 	0m7.785s
> gfs 2 node	8m30.809s	0m0.086s 	0m7.759s
> 
> 
> Comment/questions:
> 
> Tar plus sync on gfs on 1 node is nearly 3x slower than ext3
> (about 83s vs 28.5s total).
> Tar on 2 gfs nodes in parallel shows reverse scaling:
> 	each node takes nearly 4 minutes.
> 
> Is there some reason why sync is so fast on gfs?
> ext3 shows a fast tar then a long sync; gfs shows a long
> tar and a fairly fast sync.
> 
> A first du is around 8 times slower than ext3.  This must be
> the time to instantiate and acquire the DLM locks for the
> inodes.
> 
> Do you know the expected time to instantiate and acquire a
> DLM lock?
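> 
> (A back-of-envelope estimate, assuming the 2.6.8.1 tree holds
> roughly 17,000 inodes; the exact count was not measured here:
> 
>     46s  / ~17000 inodes  =  ~2.7 ms per inode  (gfs, first du)
>     5.4s / ~17000 inodes  =  ~0.3 ms per inode  (ext3, first du)
> 
> If lock traffic dominates the difference, each DLM lock would
> cost a few milliseconds to instantiate and acquire.)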
> 
> rm is about 6 times slower on gfs than on ext3 for a single
> node, and removes on 2 nodes in parallel show reverse scaling
> (7-8 minutes each).  The trees are in separate directories, so
> one would not expect DLM conflicts.  A sketch of the parallel
> run follows.
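> 
> (How the parallel remove can be driven from one node; node2 and
> the /gfs/dir* paths are illustrative:
> 
>     #!/bin/sh
>     # Start the remove on the second node over ssh, run the
>     # local one in parallel, and wait for both; each node
>     # removes its own copy, so the directories do not overlap.
>     ssh node2 'cd /gfs/dir2 && time rm -rf linux-2.6.8.1' &
>     cd /gfs/dir1 || exit 1
>     time rm -rf linux-2.6.8.1
>     wait
> )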
> 
> Thoughts?
> 
> Daniel
> 



