
Re: [Linux-cluster] GFS continue to reboot nodes



Can you explain this (removing the qdisk)? And should I use 3 journals or 8 journals?

Thanks for the really great help.


 
Thanks,
Muhammad Ammad Shah
 





Date: Mon, 25 Jan 2010 19:04:57 +0530
Subject: Re: GFS continue to reboot nodes
From: rajatjpatel gmail com
To: mammadshah hotmail com
CC: linux-cluster redhat com

You can remove the qdisk and try it out; it should work.
Regards,

Rajat J Patel
D 803 Royal Classic
Link Road
Andheri West
Mumbai 53
+919920121211
www.taashee.com

FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...


On Mon, Jan 25, 2010 at 7:03 PM, Muhammad Ammad Shah <mammadshah hotmail com> wrote:

Dear Rajat,

According to the Red Hat GFS documentation, the number of journals equals the number of nodes in the cluster, and as I understand it, it cannot be increased later if I want to add more nodes. Am I right?

You set it to 3: 2 for the cluster nodes and 1 for the quorum disk? Kindly let me know if I am wrong.
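(For what it's worth, GFS2 journals are not fixed at mkfs time: gfs2_jadd can add journals to a mounted filesystem later. A minimal sketch, assuming the filesystem is mounted at the placeholder path /mnt/db_store:)

```shell
# GFS2 journals can be added after creation with gfs2_jadd.
# /mnt/db_store is a placeholder mount point; the filesystem must be mounted.
gfs2_jadd -j 2 /mnt/db_store   # add two more journals, e.g. for two new nodes
```

So sizing the journal count exactly to today's node count is not a hard limit, though each added journal does consume space from the filesystem.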



 
Thanks,
Muhammad Ammad Shah
 





Date: Mon, 25 Jan 2010 18:50:29 +0530
Subject: Re: GFS continue to reboot nodes
From: rajatjpatel gmail com
To: mammadshah hotmail com
CC: linux-cluster redhat com


root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store

root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 3 /dev/vg1_gfs/db_store
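(To spell out what those flags mean, here is an annotated restatement of the command above, not a new recipe; the cluster and device names are the ones from this thread:)

```shell
# Annotated form of the mkfs.gfs2 invocation above:
#   -p lock_dlm           use the DLM for cluster-wide locking
#   -t db_clust:db_store  <cluster name>:<fs name>; the cluster name must
#                         match the one in /etc/cluster/cluster.conf
#   -j 3                  number of journals, one per node that will mount
mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 3 /dev/vg1_gfs/db_store
```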
Regards,

Rajat J Patel


On Mon, Jan 25, 2010 at 6:24 PM, Muhammad Ammad Shah <mammadshah hotmail com> wrote:

Dear Rajat,

HI,

I have configured a two-node cluster and it is working fine with the SAN (ext3 file system). After this I configured GFS as follows:

root# pvcreate /dev/sdb
root# vgcreate -c y vg1_gfs /dev/sdc1
root# lvcreate -n db_store -l 100%FREE vg1_gfs
root# /etc/init.d/clvmd start

Started on both nodes.

root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store
root# service gfs start

root# chkconfig --level 345 clvmd on
root# chkconfig --level 345 gfs on

----------------
The problem is that when I changed the File System (ext3) resource to a GFS resource, the nodes started rebooting.

There is nothing in /var/log/messages, but when I checked the console of the node there was a message related to GFS:
DLM id:0 ...

So I removed GFS and switched back to the File System (ext3) resource.

Can I install Oracle on a File System (ext3) resource?

Or how do I troubleshoot the GFS reboots?
I need help.
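(As a starting point for troubleshooting, the RHEL cluster stack ships inspection tools that show quorum, membership, and DLM/GFS state; a minimal sketch, run on each node while the GFS resource is active:)

```shell
# Inspect cluster state on a RHEL 5 cman/DLM node:
cman_tool status              # quorum, expected votes, node count
cman_tool nodes               # per-node membership state
group_tool ls                 # fence/dlm/gfs group membership
dmesg | grep -i -E 'dlm|gfs'  # kernel messages from DLM/GFS
```

Sudden reboots with a two-node cluster plus qdisk are often fencing events, so the fenced log (and the qdisk heuristics) are worth checking as well.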




 
Thanks,
Muhammad Ammad Shah






