[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Cluster with shared storage on low budget

Nikola Savic wrote:
Digimer wrote:
Once, and *only* if the fence was successful, the cluster will reform.
Once the cluster configuration is in place, recovery of the file system
can begin (ie: the journal can be replayed). Finally, normal operation
can continue, albeit with one less node. This is also where the resource
manager (rgmanager or pacemaker) starts shuffling around any resources
that were lost when the node went down.

  From the guide you sent me, I understood that for fencing to work well,
servers should have IPMI available on their motherboards.

  My client is going to purchase servers from Hetzner's EQ line.
I asked their support whether IPMI is available. Since another client of
mine already has a server with them, I tried to install the IPMI-related
packages (as you specified in the guide). The IPMI service doesn't start,
so I assume it's not available or not turned on in the BIOS.

That doesn't mean much. The IPMI service isn't what you use for fencing in this context. It's for diagnostics (e.g. advanced sensor readings, fan speeds, temperatures, voltages, etc.). Think of it as lm_sensors on steroids. For fencing you need to connect to the machine externally over the network via IPMI, and this will run at firmware level (i.e. you need to be able to power the machine on and off without an OS running).
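As a quick sanity check, you can test out-of-band IPMI access from another machine with ipmitool; fence_ipmilan (the agent the cluster itself would use) takes similar options. The address and credentials below are placeholders — substitute whatever the BMC is actually configured with:

```shell
# Query power state over the network (IPMI-over-LAN). This talks to the
# BMC firmware directly, so it works even when the OS is down.
ipmitool -I lanplus -H 10.0.0.2 -U admin -P secret chassis power status

# The cluster's fencing agent uses the same interface; a manual status
# check looks like this:
fence_ipmilan -a 10.0.0.2 -l admin -p secret -o status
```

If those commands work from a peer node, IPMI fencing should be usable; if the BMC isn't enabled or wired up, they will simply time out.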

  How would the cluster work if no IPMI or similar technology is available
for fencing? If one of the nodes dies and no fencing is available, will
the cluster hang until an administrator performs manual fencing?

Yes, that's about the size of it.
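For the record, with manual fencing the cluster blocks recovery until the administrator confirms the dead node is really off and then acknowledges the fence by hand, something along these lines (the node name is a placeholder):

```shell
# Only run this AFTER physically confirming the node is powered off --
# acknowledging a node that is still alive can corrupt the file system.
fence_ack_manual -n node2.example.com
```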

There are add-in cards you can use to add fencing functionality even if you don't have it built into the server, e.g. the Raritan eRIC G4 and similar. I wrote a fencing agent for those; you should be able to find it in the Red Hat Bugzilla. They can be found for about £175 or so. That may or may not compare favourably to what you can get with the servers from the vendor.

Alternatively, you can use network-controllable power bars for fencing; these may work out cheaper (you need one eRIC card per server, whereas assuming your servers have dual PSUs, you'd only need two power bars in total).
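With dual PSUs, the fence method has to cut both feeds before restoring either, otherwise the node never actually loses power. A rough cluster.conf sketch of that pattern — all names, ports and addresses are placeholders, and fence_apc is just one example of a power-switch agent:

```xml
<!-- Sketch only; adapt device names, outlet ports and addresses. -->
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="power">
      <!-- Cut both feeds first... -->
      <device name="pdu1" port="1" action="off"/>
      <device name="pdu2" port="1" action="off"/>
      <!-- ...then restore them, so the node was provably dead. -->
      <device name="pdu1" port="1" action="on"/>
      <device name="pdu2" port="1" action="on"/>
    </method>
  </fence>
</clusternode>

<fencedevices>
  <fencedevice name="pdu1" agent="fence_apc" ipaddr="10.0.0.10" login="apc" passwd="apc"/>
  <fencedevice name="pdu2" agent="fence_apc" ipaddr="10.0.0.11" login="apc" passwd="apc"/>
</fencedevices>
```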

Something else just occurs to me - you mentioned MySQL. You do realize that its performance will be atrocious on a shared cluster file system (ANY shared cluster file system), right? Unless you only intend to run mysqld on a single node at a time (in which case there's no point in putting it on a cluster file system).
