Re: [Linux-cluster] Software Iscsi with Redhat Cluster
- From: "Ryan Thomson" <thomsonr ucalgary ca>
- To: "linux clustering" <linux-cluster redhat com>
- Subject: Re: [Linux-cluster] Software Iscsi with Redhat Cluster
- Date: Wed, 11 Jan 2006 01:46:45 -0700 (MST)
I'm mounting iSCSI targets on my cluster nodes. The iSCSI target machines
are not cluster nodes, but they do sit on the same private subnet as the
cluster nodes. I then put my iSCSI devices into a cluster-aware volume
group and create logical volumes inside that volume group. I then format
those volumes with GFS and mount them on the appropriate nodes. Some
volumes are mounted on all nodes, some are not.
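For anyone wanting to replicate this, the setup above boils down to a few
LVM2/GFS commands. This is only a sketch of the idea, not my actual setup:
the device path, VG/LV names, cluster name and journal count are all
placeholders for whatever your environment uses.

```shell
# /dev/sdb is the block device the iSCSI initiator presented (placeholder)
pvcreate /dev/sdb

# -c y marks the volume group clustered so clvmd coordinates
# metadata changes across all nodes
vgcreate -c y clustervg /dev/sdb

# carve out a logical volume for one GFS filesystem
lvcreate -L 200G -n gfs01 clustervg

# lock_dlm gives cluster-wide locking; -t is <clustername>:<fsname>;
# -j is one journal per node that will mount it (4 nodes here)
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 4 /dev/clustervg/gfs01

# mount on whichever nodes should see it
mount -t gfs /dev/clustervg/gfs01 /mnt/gfs01
```

These commands need root and a running cluster (cman/clvmd), so treat them
as a recipe to adapt rather than something to paste in as-is.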
I'm doing it this way because I find it *much* easier to manage my volumes
from the cluster nodes instead of the independent storage devices. I
realize clvmd isn't for everyone though.
Either way, my experience with the major Linux software iSCSI drivers has
been pretty good. I assume you'd get better performance with high-end disk
if you used an iSCSI HBA instead of loading your server CPU, but software
iSCSI works for me at near-local speeds on my 3ware SATA RAID5 disk
servers.
The nice thing about this whole setup is that I *can* move to FC or GNBD
or InfiniBand (I think; don't quote me) later if I want, as anything I can
mount on the cluster nodes as a block device can be used as shared
storage. For us, this was a "selling" point for RHCS/GFS. We had disk we
wanted to include in a SAN environment, with the ability to add any kind
of backend storage later. RHCS/GFS delivered.
> Thanks for the reply,
> So I take it that you are exporting LVM volume groups? I'm trying to
> avoid placing many services on the cluster, so avoiding CLVM is a big
> one for me. I was planning on exporting the logical volumes, about 28 or
> so in total, via iSCSI.
> The reason for avoiding Cluster Suite for anything more than GFS is that
> with Xen (3 real servers, 9 virtual ones) the cluster can stop
> functioning correctly if all the virtual servers go down but the
> physical ones remain working fine. I've yet to find a way to stop the
> virtual servers from bringing down the whole cluster - this may be
> possible, but I'm rather new to RHCS :)
> Ryan Thomson wrote:
>>I'm just about to put a completely Linux-based software iSCSI Red Hat
>>Cluster with GFS into production. We have four RHEL4AS machines acting
>>as cluster nodes, and an 8TB RHEL4AS server is exporting disk as two
>>arrays/iSCSI targets to the cluster nodes. Several more storage boxes we
>>already own (previously Linux servers exporting large arrays over NFS)
>>are to be added as iSCSI targets.
>>I am using the iSCSI initiator that comes with RHEL4U2 - the Cisco
>>open-source one, I believe. For targets, I'm using the iSCSI Enterprise
>>Target (http://sourceforge.net/projects/iscsitarget/). The only thing
>>I've found so far that doesn't seem to work is the iSCSI alias: I can't
>>seem to get the alias I set on the target to show up on the initiator. I
>>don't know if the problem is the target or the initiator, as I haven't
>>found anything online yet about this issue. The currently available
>>Linux software seems to work pretty much flawlessly for me otherwise.
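For reference, a minimal IET target definition with an alias looks roughly
like this; the IQN, LUN path, address and alias below are illustrative
placeholders, not the actual configuration from this setup. The matching
initiator side on RHEL4U2 is just a DiscoveryAddress line in
/etc/iscsi.conf:

```
# /etc/ietd.conf on the target (iSCSI Enterprise Target) - illustrative
Target iqn.2006-01.com.example:storage.array0
        Alias array0
        Lun 0 Path=/dev/sdb,Type=fileio

# /etc/iscsi.conf on the initiator (RHEL4U2 software initiator) - illustrative
# DiscoveryAddress=192.168.0.10
```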
>>So far it's been quite easy and painless setting up CLVM volumes and
>>putting GFS on them; I even wrote a basic wrapper script to do all the
>>work for me, streamlining the procedure. Filesystem expansion seems to
>>work as expected. I haven't played with snapshots.
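The online expansion mentioned above is two steps: grow the logical
volume, then grow GFS into the new space. A sketch with placeholder LV and
mount-point names (gfs_grow operates on a mounted filesystem, so the mount
stays online):

```shell
# add 100G to the logical volume backing the GFS filesystem
# (device and size are placeholders)
lvextend -L +100G /dev/clustervg/gfs01

# grow the mounted GFS filesystem to fill the enlarged device
gfs_grow /mnt/gfs01
```

Like the commands earlier, these need root and a live cluster, so adapt
rather than paste.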
>>Initial numbers show transfer rates from end to end (NFS clients to
>>cluster NFS server to GFS) to be better for iSCSI than GNBD. Keep in
>>mind these are initial tests using bonnie++ and using 'time' to time
>>file copies of various sizes, nothing concrete. I suspected NFS would be
>>the bottleneck, but it seems the storage interconnect/fabric protocol
>>still makes a difference even with NFS being crappy to the clients.
>>From cluster nodes to storage, I found transfer rates to be near local
>>with iSCSI - again, don't take my word for it; do your own tests. My
>>hardware doesn't have very high-end disk, just SATA with 3ware 9500
>>cards. I didn't do the cluster-node-to-storage test with GNBD :(
>>Anyway, so far my initial experience has been great. I solved an issue
>>that was causing my cluster nodes to kernel panic, and ever since, it's
>>been serving Apache, MySQL, OpenLDAP and NFS exports very well. I
>>haven't stress tested it yet as I don't have a workable means to do so
>>right now besides migrating users over slowly.
>>I have zero experience with Xen so I can't help you there.
>>I hope that helps.
>>University Of Calgary Biocomputing
>>>Is anyone in a production setting using software iSCSI targets and
>>>initiators as an alternative to GNBD? I'm exploring all our options for
>>>a Xen/Cluster Suite n+2 server setup for our ISP and would like to hear
>>>people's thoughts on the best option. Rather than using a SAN with FC,
>>>we have decided to go with an Intel RAID array with standard Linux to
>>>keep initial costs down, and need to find out what people are using in
>>>production to export block devices.
>>>Thanks in advance
>>>Linux-cluster mailing list
>>>Linux-cluster redhat com