
[Linux-cluster] clvmd without GFS?



Just seeking some opinions here...

I have observed some really poor performance in GFS when dealing with large numbers of small files. It seems to be designed to scale well with respect to throughput, at the apparent expense of metadata and directory operations, which are really slow. For example, in a directory with roughly 100,000 4k files, a simple 'ls -l' with lock_dlm took over three hours to complete on our test setup, with no contention (only one machine had the disk mounted at all). (Using Debian packages dated 16 September 2004.)
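For anyone who wants to reproduce this kind of metadata-heavy workload, here is a rough sketch (scaled down to 1,000 files so it finishes quickly on a local filesystem; bump the count and point TESTDIR at a GFS mount to approximate our test):

```shell
# Sketch only: TESTDIR and the file count are placeholders, not our
# actual test rig. 'ls -l' has to stat every inode, which is exactly
# where GFS's per-file locking seems to hurt.
TESTDIR=$(mktemp -d)
for i in $(seq 1 1000); do
    truncate -s 4k "$TESTDIR/file$i"
done
time ls -l "$TESTDIR" > /dev/null
# (remember to 'rm -rf "$TESTDIR"' when done)
```

On a local ext3 filesystem this completes in a fraction of a second even at 100,000 files, which is what makes the three-hour GFS figure so striking.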

So, in an effort to get something set up quickly, I think I am going to try setting up a regular filesystem on top of LVM2. Since these disks are shared, we'll at least have a warm-failover capability if the server machine goes down. (If I have the energy, I will set up cluster failover to handle that automatically too.) The question I have is: how does clvmd affect this? Without it, I know you have to force LVM2 to re-read the VG information from disk when moving volumes between nodes; does clvmd make this step unnecessary? Also, is it possible to reliably mount, read-only, a snapshot of an LV that is still mounted read-write on the other node? (It seems like it should be.)
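For concreteness, the manual failover step I mean looks something like the following on the standby node (VG/LV names and the mount point are made up for the example; this assumes the original node is really down, since mounting a non-cluster filesystem on two nodes at once will corrupt it):

```shell
# Without clvmd, this node's view of the LVM metadata may be stale,
# so force a rescan of the shared disks first:
vgscan

# Activate the shared volume group locally, then mount.
# 'sharedvg' and 'datalv' are placeholder names.
vgchange -ay sharedvg
mount /dev/sharedvg/datalv /mnt/data
```

My understanding is that clvmd's job is to keep the LVM metadata view coherent across nodes, which would make the explicit vgscan unnecessary — but that's exactly the part I'd like someone to confirm.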

I'd appreciate insights from anyone who has any experience with this kind of setup.


TIA


-m

