[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] which is better gfs2 and ocfs2?

On 12/03/11 17:46, Jeff Sturm wrote:
> 	[root cluster1 76]# ls | wc -l

> The key is that only a few locks are needed to list the directory:

You assume NFS clients are simply using "ls".

> Running "ls -l" on the same directory takes a bit longer (by a factor of
> about 20):

Or more. Try it with 256, 512, 1024 and 4096 files in the directory.

Then try it with 16k, 32k, 64k and 128k files.

Yes, users do have directories this large.
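The difference is easy to reproduce. A minimal sketch (the directory path and file counts are illustrative, not from the original post): plain "ls" only reads directory entries, while "ls -l" additionally stats every inode, and on GFS each of those stats can mean a cluster lock acquisition. On a local filesystem you won't see the lock penalty itself, but the script shows how to measure the two operations at increasing directory sizes:

```shell
#!/bin/sh
# Compare "ls" (readdir only) against "ls -l" (readdir + one stat per file)
# at several directory sizes. Run this on a GFS mount to see the penalty grow.
dir=$(mktemp -d)

for count in 256 1024 4096; do
    # top up the directory to $count files
    i=$(ls "$dir" | wc -l)
    while [ "$i" -lt "$count" ]; do
        i=$((i + 1))
        : > "$dir/file$i"
    done

    echo "=== $count files ==="
    time ls "$dir" > /dev/null      # enumerate names only
    time ls -l "$dir" > /dev/null   # stat every inode as well
done

rm -rf "$dir"
```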

> For better or worse, "ls -l" (or equivalently, the aliased "ls
--color=tty" for Red Hat users) is a very common operation for
interactive users, and such users often have an immediate negative
reaction to using GFS as a consequence.

Those users are paying for GFS installations. They have every right to criticize its shockingly poor performance for these operations, especially when it adversely impacts their ability to get work done.

In addition, the same problem appears every time a backup is run - even incrementals need to stat each file to find out what's changed. Having a 2-million-file filesystem take 28 hours to run an incremental, versus 10 minutes for the same thing on ext3/4, doesn't go down at all well.
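To see why incrementals hit the same wall, consider a typical timestamp-based scheme (the paths below are hypothetical, not from the original post): find has to stat every file on the filesystem to compare its mtime against the stamp, so 2 million files means 2 million stat calls - and on GFS, potentially 2 million cluster lock acquisitions - even if almost nothing changed:

```shell
#!/bin/sh
# Sketch of a timestamp-driven incremental backup. Every file under
# $data gets stat()ed to decide whether it is newer than the stamp file;
# the stat traffic, not the data copied, dominates the runtime on GFS.
stamp=/var/lib/backup/last-run
data=/export/data

# Archive only files modified since the last run, then advance the stamp.
find "$data" -type f -newer "$stamp" -print0 |
    tar --null -T - -czf "/backup/incr-$(date +%F).tar.gz"
touch "$stamp"
```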

What you've said is right, but also comes across to the average academic as condescending - which is a fast way of further alienating them.

As far as most users are concerned, a computer is a black box. You put files in, you get files out. If it's shockingly slow, it's _not_ their problem; it's the problem of whoever installed it. It doesn't help that GFS has been sold as production-ready when it's only useful for a limited range of filesystem activities.
