
[Linux-cluster] strange slowness of ls with 1 newly created file on gfs 1 or 2



Hello,

I am testing GFS and it is very slow. Please take a look and tell me whether this is normal or I am missing something.

I have a 2-node cluster; the nodes are connected via SAS to a Promise E310s disk array. When I run dd on the attached block device, I get roughly 150 MB/s throughput on both nodes.
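Roughly, the throughput test was along these lines (I am reconstructing the exact invocation from memory; the scratch-file fallback is only there so the snippet can be run anywhere, not just on the nodes):

```shell
# Sequential-read throughput sketch. On the cluster nodes DEV was the
# array's block device (e.g. /dev/vgdata0/lvdata0); the scratch file
# below is just a stand-in so the snippet runs on any machine.
DEV=${DEV:-/tmp/ddtest.img}
[ -e "$DEV" ] || dd if=/dev/zero of="$DEV" bs=1M count=64 2>/dev/null
# dd reports bytes transferred and the rate on its last stderr line.
OUT=$(dd if="$DEV" of=/dev/null bs=1M 2>&1 | tail -n 1)
echo "$OUT"
```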
The nodes run Debian Etch, and I compiled cluster-2.00.00 with the gfs1 module.
I created one 475 GB logical volume (I don't use clvmd, just plain LVM) and made a GFS1 filesystem on it:
gfs_mkfs -t cluster1:data0 -p lock_dlm -j 2 /dev/vgdata0/lvdata0
I mounted that LV on both nodes at /d/0/, ran df on both nodes, and then ran touch on node 1:
serpico# touch /d/0/test

Then ls on node 2:
dinorscio:~# time ls /d/0/
test

real    0m9.486s
user    0m0.000s
sys     0m0.004s

It took almost 10 seconds to list one file on that filesystem. When I create another file via touch (node 1) and run ls (node 2), it again takes roughly 10 seconds.
I monitored activity with dstat: there is 50% iowait on the node running ls (50% because that node has a 2-core CPU), but no disk activity at all.
The nodes are connected via an otherwise idle 1 Gbps Ethernet link.

While ls was running, I looked at wchan with ps:
ps axf -o pid,wchan:20,cmd | grep ls
6387 sync_buffer                               \_ ls --color=auto /d/0/
I ran ps many times and it always showed sync_buffer; I never saw any other kernel function.

This is my cluster.conf:
<?xml version="1.0"?>
<cluster name="cluster1" config_version="20">
        <cman expected_votes="1" two_node="1" />
        <clusternodes>
                <clusternode name="dinorscio" votes="1" nodeid="1">
                        <fence>
                        </fence>
                </clusternode>
                <clusternode name="serpico" votes="1" nodeid="2">
                        <fence>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
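For completeness: the <fence> and <fencedevices> sections above are empty because I have not set up fencing yet. As far as I understand the documentation, a populated version would look roughly like this (fence_manual is shown purely as an illustration, not something I am actually running):

```xml
<fencedevices>
        <fencedevice name="manual" agent="fence_manual"/>
</fencedevices>
<!-- and inside each <clusternode>: -->
<fence>
        <method name="single">
                <device name="manual" nodename="dinorscio"/>
        </method>
</fence>
```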

One last thing: I also tried GFS2, with the same result.

Thank you
--
Pavel Stano

