
Re: [Linux-cluster] gfs performance tuning



Some extra info:

GFS is mounted with the noquota,noatime options
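
For reference, a minimal sketch of the matching /etc/fstab entry, with
a hypothetical volume /dev/vg_gfs/lv_web and mount point /mnt/gfs:

  /dev/vg_gfs/lv_web  /mnt/gfs  gfs  noatime,noquota  0 0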

/proc/cluster/lock_dlm/drop_count = 200000
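
lock_dlm reads this value when a filesystem is mounted, so it is
typically written to the proc file before the GFS mounts come up, e.g.
from an init script; a sketch:

  # raise the lock count at which lock_dlm asks GFS to start
  # dropping unused locks (run before mounting GFS)
  echo "200000" > /proc/cluster/lock_dlm/drop_count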

Mital



On Mon, 2007-10-08 at 19:05 -0700, Mital Patel wrote:
> Hi,
> 
> I am looking for suggestions on how to increase the speed of our GFS
> configuration.  We are using GFS on a cluster of 4 web servers with an
> EMC AX150i iSCSI server backend.  Cluster/GFS is set up and working
> properly, but performance is very poor when multiple nodes attempt to
> create files in the same shared directory.  Our application requires
> us to create many small files on the shared drive, which, according to
> http://kbase.redhat.com/faq/FAQ_78_3152.shtm, is not the best workload
> for GFS.  I am exploring alternatives, but if we can get GFS to a
> point where it's usable, we want to stick with it and hope GFS2
> improves performance even more.
> 
> I have read through some previous threads, and it seems I can eke out
> some performance by increasing the gfs_scand interval; is that correct?
> Are there any other settings that would help with performance?
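
scand_secs is set per mount point with gfs_tool settune; a minimal
sketch, assuming a hypothetical mount point /mnt/gfs:

  # lengthen the glock scan interval from the default 5s to 30s
  gfs_tool settune /mnt/gfs scand_secs 30

Note that settune values do not survive a remount, so they are normally
reapplied from an init script after the filesystem is mounted.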
> 
> System Info:
> Centos 4.5
> kernel 2.6.9-55.0.6.ELsmp
> GFS-kernel-smp-2.6.9-72.2.0.7
> GFS-6.1.14-0
> dlm-1.0.3-1
> dlm-kernel-smp-2.6.9-46.16.0.8
> 
> 
> gfs_tool gettune:
> ilimit1 = 100
> ilimit1_tries = 3
> ilimit1_min = 1
> ilimit2 = 500
> ilimit2_tries = 10
> ilimit2_min = 3
> demote_secs = 300
> incore_log_blocks = 1024
> jindex_refresh_secs = 60
> depend_secs = 60
> scand_secs = 5
> recoverd_secs = 60
> logd_secs = 1
> quotad_secs = 5
> inoded_secs = 15
> glock_purge = 0
> quota_simul_sync = 64
> quota_warn_period = 10
> atime_quantum = 3600
> quota_quantum = 60
> quota_scale = 1.0000   (1, 1)
> quota_enforce = 0
> quota_account = 0
> new_files_jdata = 0
> new_files_directio = 0
> max_atomic_write = 4194304
> max_readahead = 262144
> lockdump_size = 131072
> stall_secs = 600
> complain_secs = 10
> reclaim_limit = 5000
> entries_per_readdir = 32
> prefetch_secs = 10
> statfs_slots = 64
> max_mhc = 10000
> greedy_default = 100
> greedy_quantum = 25
> greedy_max = 250
> rgrp_try_threshold = 100
> statfs_fast = 0
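
Any of the tunables above can be changed the same way. A sketch, again
assuming the hypothetical mount point /mnt/gfs (the values shown are
illustrative, not recommendations):

  # shorten how long an unused glock is cached before demotion
  gfs_tool settune /mnt/gfs demote_secs 200
  # let gfs_scand purge up to 50% of unused glocks on each pass
  gfs_tool settune /mnt/gfs glock_purge 50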
> 
> 
> 
> 
> 
> Mital
> 
> --
> Linux-cluster mailing list
> Linux-cluster@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
-- 
Mital Patel
Systems Administrator
Tiny Prints Inc

