[Linux-cluster] oprofile for tar/rm tests

Cahill, Ben M ben.m.cahill at intel.com
Fri Oct 22 06:53:20 UTC 2004


Meant to mention, this was CVS code from 9/23/04.

-- Ben --

Opinions are mine, not Intel's 

> -----Original Message-----
> From: linux-cluster-bounces at redhat.com 
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Cahill, Ben M
> Sent: Friday, October 22, 2004 2:44 AM
> To: linux-cluster at redhat.com
> Subject: [Linux-cluster] oprofile for tar/rm tests
> 
> Hi all,
> 
> I ran the oprofile utility while doing the same sort of 
> tar/rm tests that Daniel McNeil and others have been running 
> (although I didn't do the sync).  oprofile periodically takes 
> a sample of the CPU instruction pointer to figure out how the 
> CPU is spending its time.
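> 
> (If you want to try it: the basic setup, before --start, is roughly 
> the following -- a sketch from my notes, not gospel.  The vmlinux 
> path is just an example, point it at your own uncompressed kernel 
> image, or use --no-vmlinux if you only care about module samples.)
> 
> # with an uncompressed kernel image available:
> opcontrol --setup --vmlinux=/usr/src/linux-2.6.8.1/vmlinux
> # or, if you only care about module samples:
> opcontrol --setup --no-vmlinux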
> 
> This was on a single node, using a FC JBOD with a single physical 
> disk, no volume manager, the nolock protocol, on a 1 GHz dual-Xeon 
> box with 1 GByte of RAM.
> 
> In average real time, the tar takes about 46 seconds and the rm -rf 
> about 26 seconds, when repeatedly cycling between the two.
> 
> Attached is a result file for the tar run, grepped to show only the 
> gfs calls.  I'll send the one for rm in a separate mail, to try to 
> stay under the list's mail size filter limit.
> 
> Hot spots for tar are:
> 
> gfs_dpin
> gfs_glock_dq
> glock_wait_internal
> gfs_holder_init
> gfs_glock_nq
> 
> Hot spots for rm are:
> 
> gfs_dpin
> gfs_ail_empty
> gfs_unlinked_get
> do_strip
> gfs_glock_dq
> 
> If you use the oprofile tool, don't make the mistake I did of 
> mounting gfs on a "/gfs" mountpoint.  opreport looked there first 
> when trying to find the "gfs" module for symbols (oops, bad format)!
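> 
> (A quick way to double-check which file opreport should be pulling 
> "gfs" symbols from -- assuming the module is loaded, and hedging on 
> the exact module filename -- is something like:
> 
> modinfo -n gfs
> find /lib/modules/`uname -r` -name 'gfs*.ko'
> 
> and then make sure your mountpoint doesn't shadow that path.)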
> 
> Sequence I followed:
> 
> cd /gfsmount
> opcontrol --start
> cp /path/to/linux-2.6.7.tar.gz .
> tar -xvzf linux-2.6.7.tar.gz
> opcontrol --shutdown
> opreport -lp /lib/modules/2.6.8.1/kernel > report
> 
> Similar sequence for the rm -rf run (sketched below).
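> 
> (Spelled out, the rm pass is roughly the following -- the directory 
> and report names are just what I happened to use, adjust to taste:)
> 
> opcontrol --reset
> opcontrol --start
> rm -rf linux-2.6.7    # the tree the tarball unpacked to
> opcontrol --shutdown
> opreport -lp /lib/modules/2.6.8.1/kernel > report-rm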
> 
> Between oprofile runs, to erase old results, do:
> 
> opcontrol --reset
> 
> -- Ben --
> 
> Opinions are mine, not Intel's
> 