[Linux-cluster] gfs and cluster nodes rebooting

Muhammad Ammad Shah mammadshah at hotmail.com
Mon Jan 25 12:27:53 UTC 2010



HI,



I have configured a two-node cluster and it is working fine with a SAN LUN formatted as ext3. After this I configured GFS using the following:



root# pvcreate /dev/sdb

root# vgcreate -c y vg1_gfs /dev/sdc1

root# lvcreate -n db_store -l 100%FREE vg1_gfs

root# /etc/init.d/clvmd start

(started on both nodes)

root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store

root# service gfs start

root# chkconfig --level 345 clvmd on

root# chkconfig --level 345 gfs on



----------------
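One sanity check on the setup above (my suggestion, not something from this thread): the part of the GFS2 lock table before the colon must exactly match the cluster name in cluster.conf, otherwise mounts fail and nodes can end up fenced. The values below are taken from the mkfs line; on a live node you would read them from `cman_tool status` and `gfs2_tool sb /dev/vg1_gfs/db_store table` instead.

```shell
#!/bin/sh
# Sketch: check that the lock-table prefix given to mkfs.gfs2 matches
# the cluster name. Both values here are copied from the commands in
# this mail; replace them with the real output on your nodes.
LOCKTABLE="db_clust:db_store"   # the -t value passed to mkfs.gfs2
CLUSTER="db_clust"              # <cluster name="..."> in cluster.conf

if [ "${LOCKTABLE%%:*}" = "$CLUSTER" ]; then
    echo "lock table prefix matches cluster name"
else
    echo "mismatch: GFS2 will refuse to mount this filesystem"
fi
```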

The problem is: as soon as I changed the file system resource from ext3 to a GFS resource, the nodes started rebooting.



There is nothing in /var/log/messages, but when I checked the console of the node there was a message related to GFS/DLM:

DLM id:0 ...
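If a node is fenced before syslog can flush, the console message is often the only trace. One way to capture it (a configuration sketch; every address, port, and interface name below is a placeholder for your network, not from this thread) is netconsole, which streams kernel console output over UDP to another host:

```shell
# On the node that reboots -- eth0/192.168.0.1 is the local side,
# 192.168.0.2:6666 is a log host that stays up (all values are examples):
modprobe netconsole netconsole=6665@192.168.0.1/eth0,6666@192.168.0.2/

# On the log host, capture the stream:
nc -u -l 6666 | tee gfs-console.log
```

That way the DLM/GFS messages printed just before the fence survive the reboot.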



So I removed GFS and switched back to the ext3 file system resource.



Can I install Oracle on the ext3 file system resource?



Or, how can I troubleshoot the GFS-related reboots?
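A first troubleshooting pass I would try (a sketch; these are the standard RHEL cluster-suite tools, but which ones exist depends on your release) is to collect cluster, fence-domain, and DLM state from each node before switching the resource back to GFS:

```shell
#!/bin/sh
# Run each diagnostic only if it is installed, so the script is safe
# to run on any node.
run() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "== $* =="
        "$@"
    else
        echo "== $1: not installed on this host =="
    fi
}

run cman_tool status   # quorum, cluster name, expected votes
run group_tool ls      # fence/dlm/gfs group membership and state
run clustat            # rgmanager view of nodes and services
run dmesg              # GFS2/DLM messages still in the kernel ring buffer
```

Since the nodes reboot rather than panic-loop, the likely trigger is fencing, so it is also worth confirming the fence device configuration in cluster.conf before re-enabling the GFS resource.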



Need help.

Thanks,
Muhammad Ammad Shah
