
[Linux-cluster] GFS on 2.6.8.1



Hey all,

I finally got my 3-node fibre channel cluster up and running
on 2.6.8.1 and the latest GFS bits.

My setup:

3 machines, each with:
        2 processors (800 MHz Pentium III)
        512MB of memory
        2 100Mb Ethernet ports (1 public, 1 private)
        1 2-port Qlogic FC host adapter
2 F/C switches cascaded together
1 10-disk, dual-controller FAStT200 (36GB 10,000rpm drives)

I am just starting to test, and ran a quick untar test to
get an approximate comparison of GFS performance against ext3.
GFS is mounted on only one node for this test.
Here are the results:

The command run was 'time tar xf /Views/linux-2.6.8.1.tar',
where /Views is an NFS-mounted file system and the current
working directory is on a clean file system on a single
disk drive.

			real		user		sys
ext3 data=ordered	0m16.962s	0m0.552s	0m6.529s	
ext3 data=journal	0m39.599s	0m0.501s	0m5.856s		
gfs 1-node mounted	1m23.849s	0m0.890s	0m17.991s


The 2nd test was removing the files (time rm -rf linux-2.6.8.1/)

			real		user		sys
ext3 data=ordered	0m1.225s	0m0.021s	0m1.048s
ext3 data=journal	0m1.286s	0m0.024s	0m1.038s
gfs 1-node mounted	0m49.565s	0m0.094s	0m8.191s
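For anyone who wants to repeat the comparison, here is a minimal sketch of the two tests above as a script. run_bench is a hypothetical helper name, and the tarball/directory arguments depend on your setup; the actual runs were just the two time'd commands shown earlier.

```shell
# Minimal sketch of the two tests. run_bench is a hypothetical helper;
# pass the path to the kernel tarball and a directory on the file
# system under test (GFS, ext3 data=ordered, ext3 data=journal, ...).
run_bench() {
    tarball=$1
    testdir=$2
    (
        cd "$testdir" || exit 1
        time tar xf "$tarball"      # test 1: untar (metadata-heavy creates)
        sync                        # flush dirty data before the remove
        time rm -rf linux-2.6.8.1   # test 2: remove (unlink/deallocate)
    )
}
```

Running it once per file system (e.g. run_bench /Views/linux-2.6.8.1.tar /mnt/gfs) keeps the comparison apples-to-apples; 'time' reports the real/user/sys split shown in the tables.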


Questions:

1. Is GFS doing the equivalent of data=journal?

    If it is, it is twice as slow as ext3 data=journal.
    Is this expected?

2.  What is going on in the remove that is taking so long?

    With only one node mounting the GFS file system (in a 3-node
    cluster), that node should master all the locks.

3.  If I re-run the tar test after the remove (without an
     umount/mount), the tar times are basically the same.  I would
     have expected GFS to cache the freed inodes and give a faster
     second tar run.  When does GFS stop caching the inodes (and
     the DLM locks on them)?

Now, on to more testing!

Thanks,

Daniel	


