
Re: [Linux-cluster] gfs2_tool settune demote_secs



If GFS2 glocks are purged based on memory pressure, what happens when it runs on a box with a large amount of memory, e.g. RHEL 5.x with 128 GB of RAM?  We ended up having to move away from GFS2 due to serious performance issues with exactly this setup, and our performance issues were largely centered on commands like ls or rm against gfs2 filesystems containing large directory structures with millions of files.

In our case, something as simple as copying a whole filesystem to another filesystem would drive the load average to 50 or more and take 8+ hours to complete.  The same copy on NFS or ext3 usually took 1 to 2 hours.  A NetBackup run over 10 of those filesystems took ~40 hours to complete, so we were getting maybe one good backup per week, and in some cases the backup itself caused a cluster crash.

We are still using our GFS1 clusters, since their performance is very good as long as the network is stable, but we are phasing out most of our GFS2 clusters in favor of NFS.

On Fri, Oct 9, 2009 at 1:01 PM, Steven Whitehouse <swhiteho redhat com> wrote:
Hi,

On Fri, 2009-10-09 at 09:55 -0700, Scooter Morris wrote:
> Hi all,
>     On RHEL 5.3/5.4(?) we had changed the value of demote_secs to
> significantly improve the performance of our gfs2 filesystem for certain
> tasks (notably rm -r on large directories).  I recently noticed that
> that tuning value is no longer available (part of a recent update, or
> part of 5.4?).  Can someone tell me what, if anything replaces this?  Is
> it now a mount option, or is there some other way to tune this value?
>
> Thanks in advance.
>
> -- scooter
>
> --
> Linux-cluster mailing list
> Linux-cluster redhat com
> https://www.redhat.com/mailman/listinfo/linux-cluster
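(For anyone landing on this thread later: on RHEL 5.3 and earlier the tunable being discussed was set per mount point with gfs2_tool.  The mount point and value below are placeholders chosen for illustration; this is old syntax that current gfs2-utils no longer accepts, as Steve explains below.)

```shell
# Old-style tuning (removed in later updates): lower demote_secs so
# glocks are demoted sooner, which helped workloads like 'rm -r' on
# large directory trees.  /mnt/gfs2 and 100 are placeholder values.
gfs2_tool settune /mnt/gfs2 demote_secs 100

# Show the current tunables to verify the change:
gfs2_tool gettune /mnt/gfs2 | grep demote_secs
```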

Nothing replaces it. The glocks are disposed of automatically on an LRU
basis when there is enough memory pressure to require it. You can alter
the amount of memory pressure on the VFS caches (including the glocks)
but not specifically the glocks themselves.
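The generic VM knob for that is vm.vfs_cache_pressure, which biases how aggressively the kernel reclaims dentry and inode caches (and, indirectly, the glocks pinned by cached inodes).  A minimal sketch; 200 is an arbitrary example value, not a recommendation:

```shell
# Show the current reclaim bias for the VFS caches (the default is 100)
cat /proc/sys/vm/vfs_cache_pressure

# Values above 100 make the kernel reclaim dentries/inodes more
# aggressively, which also releases the glocks held by cached inodes.
# Requires root:
#   sysctl -w vm.vfs_cache_pressure=200
# To persist across reboots, add to /etc/sysctl.conf:
#   vm.vfs_cache_pressure = 200
```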

The idea is that it should be self-tuning now, adjusting itself to the
conditions prevailing at the time. If there are any remaining
performance issues, though, we'd like to know so that they can be
addressed.

Steve.



