[Linux-cluster] GFS performance issue on CentOS 5.5

Steven Whitehouse swhiteho at redhat.com
Mon Jul 26 09:23:26 UTC 2010


Hi,

On Fri, 2010-07-23 at 13:52 -0400, Fred Wittekind wrote:
> Does manually tuning glock trimming still apply to GFS2?
> 
No, glock trimming is done automatically, on an LRU basis, in response
to memory pressure.
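If you want to see this happening, and assuming your kernel exposes the
gfs2 debugfs interface (the path below is derived from your lock table
name, so treat it as a guess rather than gospel), you can watch the
cached glock count fall when memory pressure is applied:

  mount -t debugfs none /sys/kernel/debug
  # each cached glock is a line starting with "G:" in the dump
  grep -c '^G:' /sys/kernel/debug/gfs2/Web:homewww/glocks
  # drop cached dentries/inodes, which lets the LRU trim glocks
  echo 2 > /proc/sys/vm/drop_caches
  grep -c '^G:' /sys/kernel/debug/gfs2/Web:homewww/glocks

The second count should come out noticeably lower than the first.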

> I have seen references that have talked about increasing resource
> groups, and also sources saying that if increased too high it could
> hurt performance.  From what I can figure out, it looks like the only
> way to change this is a reformat, is this correct?  I am not opposed
> to doing this, as this cluster is not yet in production, although
> reformatting and restoring all the content to the file system will
> take a pretty decent amount of time, so I don't really want to do it
> too many times.  Would increasing the resource groups be a worthwhile
> thing to try, and if so, is there any good way to figure out (short
> of trial and error) what value I should use for the resource group
> allocation?
> 
You can, depending on workload, get better performance by altering the
balance between resource group size and the number of resource groups.
This usually takes some experimentation, due to differences in hardware
and workload in any particular situation. As you say, it does require a
reformat, but mkfs.gfs2 isn't usually too far wrong, so the defaults
tend to be reasonably good anyway.
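If you do experiment, the resource group size is set at mkfs time with
-r (in megabytes, between 32 and 2048). A sketch using the lock table
and journal count from your gfs2_tool output below; the device path is
a placeholder you would need to replace with your real iSCSI device:

  # larger resource groups mean fewer of them (untested example)
  mkfs.gfs2 -p lock_dlm -t Web:homewww -j 2 -r 2048 /dev/your_iscsi_device

On your roughly 79GB filesystem, 2048MB resource groups would give you
around 40 of them instead of the current 316.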

> I found this thread from a search, using a search link I found in one
> of the threads you gave me a link to:
> http://www.mail-archive.com/linux-cluster@redhat.com/msg00990.html
> 
This doesn't really apply to gfs2. That daemon is gone and the CPU time
used is much reduced as a consequence.
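You can sanity-check that on a running node: under gfs1 the glock
scanning daemon (gfs_scand, if I recall the name correctly) shows up as
a kernel thread, whereas on gfs2 nothing comparable should appear:

  # list gfs-related kernel threads; the bracket keeps grep
  # from matching its own process in the output
  ps ax | grep -i '[g]fs'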

Steve.

> On 7/23/2010 11:15 AM, Gordan Bobic wrote:
> > Sounds like you are suffering from extreme lock bouncing between the
> > nodes. This is a FAQ. I suggest you have a read through the mail
> > archives of this list for similar discussions, e.g.:
> >
> > http://www.mail-archive.com/linux-cluster@redhat.com/msg04412.html
> > http://www.mail-archive.com/linux-cluster@redhat.com/msg08177.html
> >
> > Gordan
> >
> > On Thu, 2010-07-22 at 11:55 -0400, Fred Wittekind wrote:
> >> Two web servers, both virtualized, with CentOS Xen servers as
> >> hosts (residing on two different physical servers).
> >> GFS is used to store home directories containing web document roots.
> >>
> >> The shared block device used by GFS is an iSCSI target, with the
> >> iSCSI initiator residing in Dom-0 and presented to the Dom-U web
> >> servers as drives.
> >> A second shared block device is provided for the quorum disk.
> >>
> >> If I hit the web site on just one of the nodes, it behaves as expected.
> >> If I try to load the website from both nodes at the same time (two web
> >> browser instances), the load average on both nodes spikes, and pages
> >> load very slowly.  The site I am trying to host is very high traffic,
> >> and if the servers can be nearly brought to their knees by two web
> >> browser instances running on a single workstation, that's not going to work.
> >>
> >> I am not seeing any error messages in the logs regarding the cluster.
> >>
> >> Any help or suggestions on how to troubleshoot this issue would be
> >> greatly appreciated.
> >>
> >>
> >> [root at www3 www]# find ./ | wc -l
> >> 64815
> >>
> >> [root at www3 www]# gfs2_tool df /home/www
> >> /home/www:
> >>   SB lock proto = "lock_dlm"
> >>   SB lock table = "Web:homewww"
> >>   SB ondisk format = 1801
> >>   SB multihost format = 1900
> >>   Block size = 4096
> >>   Journals = 2
> >>   Resource Groups = 316
> >>   Mounted lock proto = "lock_dlm"
> >>   Mounted lock table = "Web:homewww"
> >>   Mounted host data = "jid=1:id=393217:first=0"
> >>   Journal number = 1
> >>   Lock module flags = 0
> >>   Local flocks = FALSE
> >>   Local caching = FALSE
> >>
> >>   Type           Total Blocks   Used Blocks    Free Blocks    use%
> >>   ------------------------------------------------------------------------
> >>   data           20707148       4347237        16359911       21%
> >>   inodes         16426386       66475          16359911       0%
> >>
> >> [root at www3 www]# gfs2_tool gettune /home/www
> >> new_files_directio = 0
> >> new_files_jdata = 0
> >> quota_scale = 1.0000   (1, 1)
> >> logd_secs = 1
> >> recoverd_secs = 60
> >> statfs_quantum = 30
> >> stall_secs = 600
> >> quota_cache_secs = 300
> >> quota_simul_sync = 64
> >> statfs_slow = 0
> >> complain_secs = 10
> >> max_readahead = 262144
> >> quota_quantum = 60
> >> quota_warn_period = 10
> >> jindex_refresh_secs = 60
> >> log_flush_secs = 60
> >> incore_log_blocks = 1024
> >>
> >> [root at www3 www]# cat /etc/cluster/cluster.conf |egrep '(dlm)|(gfs)'
> >>         <dlm plock_ownership="1" plock_rate_limit="0"/>
> >>         <gfs_controld plock_rate_limit="0"/>
> >>
> >>
> >> Fred Wittekind