[Linux-cluster] Fence methods

Chip Burke CBurke at innova-partners.com
Fri Sep 7 16:17:52 UTC 2012


Ok, that was the hole in my testing. I expected fence_scsi to prevent
writes AND reads, so the fact that reads continued to work threw me off.
But as to the rest of your explanation, that is indeed how I have things
configured, and you were most helpful.

Thanks!
________________________________________
Chip Burke





On 9/6/12 5:37 PM, "Ryan O'Hara" <rohara at redhat.com> wrote:

>On 09/06/2012 03:45 PM, Chip Burke wrote:
>> Now that ricci is figured out, I am having some issues with fencing.
>>
>> It seems VMWare Fence works very well, but our GFS2 volume is not
>> available until it receives a "success" status. This gives us maybe
>> 30-60 seconds where we cannot access the GFS2 volumes, which equates
>> to downtime. SCSI fencing seems faster, but very unreliable.
>> If I try to fence a node, it will return "fence somenode success".
>> Great. But the node can still access the GFS2 volume.
>
>Are you absolutely sure that your array supports SCSI-3 persistent
>reservations? When you start cman, you should see that unfencing occurs.
>If successful, the devices that comprise your GFS2 volume should have
>one WERO (write exclusive, registrants only; type 5) reservation and
>one or more registrations.
>Can you use sg_persist to verify this? Better yet, use the logfile
>option for fence_scsi:
>
><fencedevice agent="fence_scsi" name="SCSI_Fence" \
>  logfile="/tmp/fence_scsi.log"/>
>
>This logfile should show you what is happening when either unfencing or
>fencing occurs.
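>
>As a rough check with sg_persist (the device name below is only a
>placeholder; use one of the LUNs that back your GFS2 volume), something
>like this should show one registered key per unfenced node plus the
>single WERO reservation:
>
>  # list registered keys: one per node that has successfully unfenced
>  sg_persist --in --read-keys --device=/dev/mapper/mpathX
>
>  # show the reservation; fence_scsi uses "Write Exclusive,
>  # registrants only" (type 5)
>  sg_persist --in --read-reservation --device=/dev/mapper/mpathX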
>
>Also, when you say you can "access" a GFS2 volume after fencing, do you
>mean you can write to this volume? If fence_scsi is working correctly,
>that should not be possible. How exactly are you accessing the volume
>after fencing?
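>
>Keep in mind that the WERO reservation only blocks writes from initiators
>whose key has been removed; reads are still allowed, which is easy to
>mistake for the fence not working. A rough way to test from the fenced
>node (the mount point and file names here are only examples) is to push
>I/O past the page cache with dd:
>
>  # a direct write from the fenced node should fail with an I/O error
>  dd if=/dev/zero of=/mnt/gfs2/fence_test bs=4096 count=1 oflag=direct
>
>  # a direct read of an existing file still succeeds under WERO
>  dd if=/mnt/gfs2/somefile of=/dev/null bs=4096 count=1 iflag=direct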
>
>> Then I am also seeing conflicting information on using Qdisk with
>> fence_scsi as it seems to be a no-no. I could swear I saw a note
>> somewhere that Qdisk and fence_scsi worked together in newer versions
>> of RHEL.
>
>Can you direct me to the conflicting information? As long as your quorum
>device is not subject to SCSI-3 persistent reservations, it should work.
>In your case, this means your quorum device must not belong to a cluster
>volume.
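>
>If in doubt, you can run the same sg_persist checks against the quorum
>device (again, the device path is only a placeholder). It should report
>no registered keys and no reservation, since fence_scsi only registers
>keys on the devices that back your cluster volumes:
>
>  # the qdisk LUN should show no registered keys and no reservation
>  sg_persist --in --read-keys --device=/dev/mapper/qdisk
>  sg_persist --in --read-reservation --device=/dev/mapper/qdisk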
>
>Ryan
>
>--
>Linux-cluster mailing list
>Linux-cluster at redhat.com
>https://www.redhat.com/mailman/listinfo/linux-cluster




