
RE: [Linux-cluster] (was: clvmd without GFS?) Effect of atime updates



Hi Matt,

Ken Preslan just checked in some code to help with this situation, i.e.
reads of a large directory taking a *long* time when atime updates are
enabled (that is, GFS mounted normally, without the -o noatime option).

Instead of forcing a WAIT until write I/O completes for each atime
update (writing the inode block for each file that gets a new atime ...
that's a lot of block writes in your case), GFS will now WAIT *only* if
another node or process needs to access the file (hopefully rare).  This
should let Linux block I/O write these blocks more efficiently, without
holding up the read process.
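
For anyone who wants to reproduce the comparison on their own cluster, a
minimal sketch (device and mount point taken from the transcript below;
adjust for your own setup, and note these commands need root):

```shell
# Remount the GFS filesystem with atime updates disabled, so plain reads
# do not generate inode write transactions.
umount /mnt/xs_media
mount -t gfs -o noatime /dev/mapper/xs_gfsvg-xs_media /mnt/xs_media

# Verify the option took effect; the output should include (rw,noatime).
mount | grep xs_media

# Time a large directory listing, and compare against a run on a
# normally-mounted (atime-enabled) filesystem.
time sh -c 'ls /mnt/xs_media/100032/mls/fmls_stills | wc -l'
```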

I'm curious how much of a difference that might make in your situation.
If you (or anyone else) can try it out, let us know the results.

-- Ben --

Opinions are mine, not Intel's

> -----Original Message-----
> From: linux-cluster-bounces redhat com 
> [mailto:linux-cluster-bounces redhat com] On Behalf Of Matt Mitchell
> Sent: Thursday, October 28, 2004 5:39 PM
> To: linux-cluster
> Subject: Re: [Linux-cluster] clvmd without GFS?
> 
> Cahill, Ben M wrote:
> > 
> > I'd be curious to know if it makes a difference if you mount using
> > the -o noatime option (see man mount)?  The default access-time
> > update threshold for GFS is 3600 seconds (1 hour).  This can cause a
> > bunch of write transactions to happen, even if you're just doing a
> > read operation such as ls.  Since your exercise is taking over an
> > hour, this might be thrashing the atime updates, but I don't know
> > how much that might be adding.
> 
> Looks like that is the case for the second time slowness; it 
> also probably explains why the directory seemed relatively 
> snappy immediately after I finished populating it but not at 
> all the next morning.
> 
> hudson:/mnt/xs_media# mount
> [ snipped ]
> /dev/mapper/xs_gfsvg-xs_media on /mnt/xs_media type gfs (rw,noatime)
> hudson:/mnt/xs_media# time sh -c 'ls 100032/mls/fmls_stills | wc -l'
> 298407
> 
> real    74m15.781s
> user    0m5.546s
> sys     0m40.529s
> hudson:/mnt/xs_media# time sh -c 'ls 100032/mls/fmls_stills | wc -l'
> 298407
> 
> real    3m37.052s
> user    0m5.502s
> sys     0m12.643s
> 
> For the sake of comparison, here is the same thing after unmounting
> both nodes and remounting only hudson, again with noatime (so it is
> not touching any disk blocks):
> hudson:/mnt/xs_media# time sh -c 'ls 100032/mls/fmls_stills | wc -l'
> 298407
> 
> real    3m59.533s
> user    0m5.501s
> sys     0m51.741s
> 
> (Now I am trying to unmount the partition again, and it's hanging.)
> 
> So it is definitely dlm behaving badly...?

No, DLM is not causing the delay ... it's just a lot of disk writes ...
with most of them causing a WAIT until the disk is written.
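
One way to observe this (assuming the sysstat package's iostat tool is
installed) is to watch per-device I/O statistics while the listing runs
in another shell:

```shell
# Report extended per-device I/O statistics every 5 seconds.  With atime
# updates enabled, the GFS device should show a steady stream of writes
# during a plain ls; with -o noatime, the writes should disappear.
iostat -x 5
```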

-- Ben --

Opinions are mine, not Intel's

> 
> -m
> 
> --
> Linux-cluster mailing list
> Linux-cluster redhat com
> http://www.redhat.com/mailman/listinfo/linux-cluster
> 

