
[Linux-cluster] Slowness above 500 RRDs



Hi,

I'm trying out GFS.  It would be a single writer / multiple readers
scenario, where one node continuously updates thousands of RRD files
while the others only read them.  If the writer fails, a reader
assumes its role of updating the files.  I'm doing benchmarks now.

There's a good bunch of RRDs in a directory.  A script scans them for
their last modification times, and then updates each in turn a couple
of times.  The number of files scanned and the length of the update
rounds are printed.  The results differ markedly between 500 and 501
files:

filecount=501
  iteration=0 elapsed time=10.425568 s
  iteration=1 elapsed time= 9.766178 s
  iteration=2 elapsed time=20.14514 s
  iteration=3 elapsed time= 2.991397 s
  iteration=4 elapsed time=20.496422 s
total elapsed time=63.824705 s

filecount=500
  iteration=0 elapsed time=6.560811 s
  iteration=1 elapsed time=0.229375 s
  iteration=2 elapsed time=0.202973 s
  iteration=3 elapsed time=0.203439 s
  iteration=4 elapsed time=0.203095 s
total elapsed time=7.399693 s
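For the record, the script does essentially the following (a Python
sketch; the names are mine, and plain appends stand in for the
rrdupdate calls the real script makes):

```python
import os
import time
import tempfile

def benchmark(filecount, iterations=5, payload=b"x" * 512):
    """Scan files for mtimes, then time several update rounds.
    Plain appends stand in for the real rrdupdate calls."""
    tmpdir = tempfile.mkdtemp()
    paths = [os.path.join(tmpdir, "f%05d.rrd" % i)
             for i in range(filecount)]
    for p in paths:
        with open(p, "wb") as f:
            f.write(payload)

    # scan the last modification times, as the script does
    mtimes = {p: os.stat(p).st_mtime for p in paths}

    times = []
    for it in range(iterations):
        start = time.time()
        for p in paths:
            with open(p, "ab") as f:
                f.write(payload)   # the real script runs rrdupdate here
        elapsed = time.time() - start
        times.append(elapsed)
        print("  iteration=%d elapsed time=%f s" % (it, elapsed))
    print("total elapsed time=%f s" % sum(times))
    return mtimes, times
```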

The files fit in the buffer cache conveniently, each being ~ 50 kB.
I'm using cluster version 1.03 and Linux 2.6.18 with 1 GB of memory.
The GFS filesystem is 40 GB; one other node keeps it mounted, but
without any activity.  The test node exercises librrd2 exclusively
(besides the usual daemons, nothing special).

The library issues an fcntl F_WRLCK before updating a file, and
according to strace, this is where most of the time goes.  Changing
the library to use flock() instead gives only a marginal speedup.
Removing the locking altogether makes all the difference, bringing GFS
to about half the speed of XFS in the same setting (XFS runs an
iteration in 0.03 seconds for both 500 and 501 files).
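To be concrete about the two locking variants, here they are in Python
terms (librrd does the equivalent fcntl() call in C; the function
names here are mine, purely for illustration):

```python
import fcntl
import os

def update_with_fcntl_lock(path, data):
    """POSIX record lock, as librrd takes before an update.
    lockf() is implemented via fcntl(fd, F_SETLKW, {F_WRLCK, ...})."""
    fd = os.open(path, os.O_RDWR)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)   # fcntl write lock, whole file
        os.write(fd, data)
    finally:
        fcntl.lockf(fd, fcntl.LOCK_UN)   # F_UNLCK
        os.close(fd)

def update_with_flock(path, data):
    """BSD flock(), the variant the patched library uses."""
    fd = os.open(path, os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)
        os.write(fd, data)
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

Under strace, the first variant shows up as fcntl() calls with
F_SETLKW, which is what the DLM has to mediate on every update.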

So, I'd like to be able to work with around 10000 files with good
performance.  Is there anything I could tune?  Removing the locking
altogether doesn't sound like the best idea.

Mounting with noatime or zeroing /proc/cluster/lock_dlm/drop_count
before mount didn't help at all.

I'd be grateful for any advice, and hope all relevant information is
here.
-- 
Thanks,
Feri.

