
[Linux-cluster] iSCSI GFS2 CMIRRORD



Hello All,
I have been successfully running a cluster for about a year. I have a question about best practice for my storage setup.

Currently, I have two front end nodes and two back end nodes. The front end nodes are part of the cluster and run all the services; the back end nodes only export raw block devices via iSCSI and are not cluster aware. The front end nodes import those block devices and use GFS2 on LVM for storage. At the moment I am only using the block devices from one of the back end nodes.

I would like the logical volumes to be mirrored across the two iSCSI devices, creating redundancy at the block level. The last time I tried this, the mirror creation sat for two days making no progress. I now have 10GB network connections at my front end and back end nodes (previously only 1GB).
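For concreteness, the mirrored setup I am attempting looks roughly like the sketch below. The device names and sizes are placeholders for the two imported iSCSI LUNs (one from each back end node), and it assumes clvmd and cmirrord are already running on the front end nodes:

```shell
# Placeholders: /dev/sdb and /dev/sdc are the iSCSI LUNs, one per back end node.
pvcreate /dev/sdb /dev/sdc

# Clustered volume group so both front end nodes coordinate via clvmd/cmirrord.
vgcreate -cy vg_gfs2 /dev/sdb /dev/sdc

# Mirrored LV with one mirror leg; mirror the log as well so there is
# no single point of failure for the mirror log (requires cmirrord).
lvcreate -m1 --mirrorlog mirrored -L 500G -n lv_gfs2 vg_gfs2
```

With this layout, the initial sync traffic flows from the front end node doing the resync out to both back end nodes, which is where the network choices below matter.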

Also, on topology: these four nodes are split across two buildings, with one front end and one back end node in each. The switches in each building have layer 2 connectivity (10GB) to each other. Each node has two 10GB connections and multiple 1GB connections.

I have come up with the following scenarios, and am looking for advice on which of these methods to use (or none).

1:
  • Connect all nodes to the 10GB switches.
  • Use one 10GB link for iSCSI only and one for other IP traffic
2:
  • Connect each back end node to each front end node via 10GB
  • Use 1GB for other IP traffic
3:
  • Connect the front end nodes to each other via 10GB
  • Connect the front end and back end nodes to the 10GB switch for IP traffic
I am also willing to use device-mapper multipath (dm-multipath) if needed.
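If multipath does turn out to be useful (e.g. each front end node reaching the same LUN over two 10GB links), I would expect a minimal configuration along these lines; the WWID and alias are placeholders, not values from my setup:

```shell
# /etc/multipath.conf (sketch)
defaults {
        user_friendly_names yes
        path_grouping_policy multibus   # spread I/O across both paths
}

multipaths {
        multipath {
                wwid  "<wwid-of-backend-lun>"   # placeholder, from `multipath -ll`
                alias backend_lun0
        }
}
```

Note this only helps with multiple paths to the same LUN; mirroring across the two back end nodes would still be done at the LVM layer as above.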

Thanks in advance for any assistance. 

Regards,
-------
Micah Schaefer
JHU/ APL
