
Re: [Linux-cluster] Cluster service restarting Locally



Oh, that's good to hear :-)
Multiple lock_nolock nodes would be... interesting...

However, you are saying you want to compare the performance of GFS
with the performance of iSCSI.
GFS is a filesystem; iSCSI is a block-level device.
May I ask how you intend to "compare" the performance of the two?
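For what it's worth, a like-for-like comparison would put a filesystem on
the same iSCSI LUN in both cases (say ext3-over-iSCSI versus GFS-over-iSCSI)
and benchmark through the filesystem layer. A rough sketch, where the mount
points are hypothetical and the LUN is reformatted between runs:

```shell
# Sequential write throughput through each filesystem (1 GB test file).
# /mnt/ext3 and /mnt/gfs are assumed mount points on the same iSCSI LUN.
time dd if=/dev/zero of=/mnt/ext3/testfile bs=1M count=1024; sync
time dd if=/dev/zero of=/mnt/gfs/testfile  bs=1M count=1024; sync
```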

Erling

On 3/9/06, Hong Zheng <hong zheng wsdtx org> wrote:
> I understand no_lock won't work for multiple nodes, so I never mount GFS
> with no_lock on multiple nodes; our cluster is a two-node active/passive
> cluster, so at any time only the active node has the GFS mount. I could
> use the iSCSI disk directly, but I just want to test if GFS has better
> performance than iSCSI.
>
> Hong
>
> -----Original Message-----
> From: linux-cluster-bounces redhat com
> [mailto:linux-cluster-bounces redhat com] On Behalf Of Erling Nygaard
> Sent: Thursday, March 09, 2006 3:52 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] Cluster service restarting Locally
>
> I am sorry if this sounds a little harsh, but I'm not sure whether
> laughing or crying is the correct reaction to this email.
>
> Let us get one thing straight.
> You are currently mounting a GFS filesystem _concurrently_ on multiple
> nodes using lock_nolock?
>
> If this is the case I can tell you that this will _not_ work. You
> _will_ corrupt your filesystem.
>
> Mounting a GFS filesystem with lock_nolock for all practical purposes
> turns the GFS filesystem into a local filesystem. There is _no_
> locking done anymore.
> With this setup there is no longer any coordination among the
> nodes to control filesystem access, so they are all going to step
> on each other's toes.
> You might as well use ext3; the end result will be the same ;-)
>
> The purpose of lock_nolock is to (temporarily) be able to mount a GFS
> filesystem on a single node in such cases where the entire locking
> infrastructure is unavailable. (Something like a massive cluster
> failure)
>
> So you should really look into setting up one of the lock services :-)
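> A sketch of the difference, for illustration only (GFS 6.0-era commands;
> the cluster name, device path, and mount point below are hypothetical):
>
> ```shell
> # Normal clustered use: choose a real lock protocol at mkfs time.
> gfs_mkfs -p lock_gulm -t mycluster:gfs01 -j 2 /dev/pool/gfs01
>
> # Emergency single-node mount: override the protocol with lock_nolock.
> # Do this on ONE node only, with the filesystem unmounted everywhere else.
> mount -t gfs /dev/pool/gfs01 /mnt/gfs01 -o lockproto=lock_nolock
> ```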
>
> E.
>
>
>
>
>
>
> On 3/9/06, Hong Zheng <hong zheng wsdtx org> wrote:
> > Lon,
> >
> > Thanks for your reply. In my system I don't use any lock system like
> > lock_gulm or lock_dlm; I use no_lock because of our application's
> > limitations. Do you think no_lock will also bring some lock traffic
> > or not? When I tried lock_gulm before, our application had very bad
> > performance, so I chose no_lock.
> >
> > And I'm not sure which update we have right now. Do you know the
> > versions for clumanager and redhat-config-cluster of RHCS3U7?
> >
> > Hong
> >
> > -----Original Message-----
> > From: linux-cluster-bounces redhat com
> > [mailto:linux-cluster-bounces redhat com] On Behalf Of Lon Hohberger
> > Sent: Wednesday, March 08, 2006 4:52 PM
> > To: linux clustering
> > Subject: RE: [Linux-cluster] Cluster service restarting Locally
> >
> > On Mon, 2006-03-06 at 14:02 -0600, Hong Zheng wrote:
> > > I'm having the same problem. My system configuration is as follows:
> > >
> > > 2-node cluster: RH ES3, GFS6.0, clumanager-1.2.28-1 and
> > > redhat-config-cluster-1.0.8-1
> > >
> > > Kernel: 2.4.21-37.EL
> > >
> > > Linux-iscsi-3.6.3 initiator: connections to iSCSI shared storage
> > > server
> >
> > If it's not fixed in U7 (which I think it should be), please file a
> > bugzilla... It sounds like the lock traffic is getting
> > network-starved.
> >
> > -- Lon
> >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster redhat com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> >
>
>
> --
> -
> Mac OS X. Because making Unix user-friendly is easier than debugging
> Windows
>
>

