[Linux-cluster] GFS on 2.6.8.1 more simple performance numbers

Wim Coekaerts wim.coekaerts at oracle.com
Sat Oct 16 04:27:57 UTC 2004


Untar of a kernel tree from a local filesystem onto a fibrechannel disk; the
same partition was reused for ext3 and ocfs2, also on 2.6.8. Hardware: 4-way
2.6 GHz Xeon, 6 GB RAM, Emulex FC controller.
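
The timed steps per filesystem were roughly the sketch below; the tarball
path, the mount point, and the remount used to get a cold-cache "first
mount" du are assumptions here, not details from the original run:

  #!/bin/sh
  # sketch of the timed steps; /mnt/test and the tarball path are placeholders
  cd /mnt/test
  time tar xf /tmp/linux-2.6.8.1.tar      # "untar"
  time sync                               # flush dirty data out to the FC disk
  cd /
  umount /mnt/test && mount /mnt/test     # remount so metadata is cold
                                          # (assumes an fstab entry for /mnt/test)
  cd /mnt/test
  time du -s linux-2.6.8.1                # du -s (first mount)
  time du -s linux-2.6.8.1                # du -s (next), served from cache
  time rm -rf linux-2.6.8.1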

ocfs2 (default journal size)
                        real            user            sys
untar                   0m15.826s       0m2.429s        0m2.901s
sync                    0m11.248s       0m0.002s        0m0.223s
du -s (first mount)     0m15.008s       0m0.030s        0m0.423s
du -s (next)            0m0.075s        0m0.015s        0m0.060s
rm -rf                  0m27.124s       0m0.020s        0m3.853s

ext3 (same partition reformatted)
                        real            user            sys
untar                   0m14.054s       0m2.310s        0m2.685s
sync                    0m13.187s       0m0.000s        0m0.317s
du -s (first mount)     0m3.620s        0m0.024s        0m0.122s
du -s (second)          0m0.066s        0m0.016s        0m0.050s
rm -rf                  0m6.846s        0m0.010s        0m0.509s

afaik ext3 does readahead on readdir, which we (ocfs2) don't do, and I doubt
gfs does either. That, plus the journal size differences, is probably why the
du, rm and sync numbers differ. I am pretty sure readahead is going to help
a lot.
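
One way to take the journal-size variable out of the comparison would be to
format both filesystems with an explicit, matching journal size; a sketch for
the ext3 side (the device name and the 64 MB size are placeholders):

  # format ext3 with an explicit journal size instead of the mke2fs default,
  # so the journal matches what the other filesystem was given
  mke2fs -j -J size=64 /dev/sdb1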

I don't have multinode data yet; I will post that as well.

We should post our DLM and NM (node manager) code once it's in workable
shape, so you can have a look at what is useful there. Ultimately we'd like
to use whatever gets into the mainline kernel; our stuff is quite simple, but
it seems to meet the needs and allows for easy configuration and
root-filesystem usage, so maybe it can be of use here.

Wim

> 
> Tar
> ---		real		user		sys
> ext3 tar	0m6.535s	0m0.429s	0m4.010s
> ext3 sync	0m21.953s	0m0.000s	0m0.574s
> 	
> gfs 1 node tar 	1m15.286s 	0m0.787s	0m17.085s
> gfs 1 node sync	0m7.734s 	0m0.000s 	0m0.190s
> 
> gfs 2 node tar	3m58.337s 	0m0.844s 	0m17.082s
> gfs 2 node sync	0m3.119s 	0m0.000s 	0m0.117s
> gfs 2 node tar	3m55.147s	0m0.911s	0m17.529s
> gfs 2 node sync	0m1.862s	0m0.001s	0m0.043s
> 
> 
> du -s linux-2.6.8.1 (after 1st mount)
> -----		real		user		sys
> ext3 		0m5.361s	0m0.039s	0m0.318s
> gfs 1 node	0m46.077s	0m0.097s	0m5.144s
> gfs 2 node	0m40.835s	0m0.069s	0m3.218s
> gfs 2 node	0m41.313s	0m0.089s	0m3.348s
> 
> A 2nd du -s should be cached.  On ext3 it always
> seems to be.  On gfs the numbers vary quite a bit.
> 
> 2nd du -s 
> ---------
> ext3 		0m0.130s	0m0.028s 	0m0.101s
> gfs 1 node 	0m20.95s	0m0.075s	0m3.102s
> gfs 1 node	0m0.453s 	0m0.044s 	0m0.408s
> gfs 2 node	0m0.446s	0m0.046s	0m0.400s
> gfs 2 node 	0m0.456s 	0m0.028s	0m0.428s
> 
> rm -rf linux-2.6.8.1
> --------------------
> ext3		0m5.050s 	0m0.019s 	0m0.822s
> gfs 1 node	0m28.583s 	0m0.094s 	0m8.354s
> gfs 2 node	7m16.295s 	0m0.073s 	0m7.785s
> gfs 2 node	8m30.809s	0m0.086s 	0m7.759s
> 
> 
> Comment/questions:
> 
> Tar plus sync on gfs on 1 node is nearly 3x slower than on ext3.
> Tar on 2 gfs nodes in parallel is showing reverse scaling:
> 	2 nodes take 4 minutes.
> 
> Is there some reason why sync is so fast on gfs?
> ext3 shows a fast tar then a long sync; gfs shows a long
> tar and a fairly fast sync.
> 
> The first-time du is around 8 times slower than on ext3.  This must
> be the time to instantiate and acquire the DLM locks for the
> inodes.
> 
> Do you know the expected time to instantiate and acquire a
> DLM lock?
> 
> rm is 6 times slower on gfs than ext3.  Reverse scaling
> on removes happening on 2 nodes in parallel.  These are
> in separate directories, so one would not expect DLM
> conflicts.
> 
> Thoughts?
> 
> Daniel
> 



