[Linux-cluster] help configuring HP ILO

ESGLinux esggrupos at gmail.com
Tue May 5 16:54:04 UTC 2009


Hello,

Thanks for your answer...

2009/5/5 Ian Hayes <cthulhucalling at gmail.com>

> For hostname you can put the FQDN or IP address...
>
> I believe that you're a bit confused about what iLO is capable of.


I absolutely agree with you ;-)

> IP3 and IP6 are for the iLO, the cluster can't use them for networking.


I don't use them for networking (I think...), I only want to use them for
fencing... (I'm beginning to think this is my mistake.)


> The cluster members need to be able to reach the iLO (IP3 and 6 in this
> case) from eth0 or eth1.


I thought I could only reach the iLO from the iLO interfaces (in my
configuration, ethMng: IP3 and IP6).


> In a 2-node cluster, this can be as simple as connecting eth0 or eth1 on one
> node to the iLO of the other node via crossover cable. The iLO is its own
> device that exists outside of the operating system.
>
> Here's an example of a cluster that I've built previously that is similar
> to your setup:
>
> Host 1:
> eth0: 192.168.0.1 (host1)
> eth1: 10.1.1.1 (host1-management)
> iLO:  10.1.1.2
>
> Host 2:
> eth0: 192.168.0.2 (host2)
> eth1: 10.1.1.3 (host2-management)
> iLO:  10.1.1.4
>
> All cluster management communication in this cluster is via eth1. I
> specified host1-management and host2-management as the hostnames in the
> cluster config to partition off cluster traffic from the interfaces that are
> actually doing the VIP work. The nodes provide a virtual IP on eth0, and a
> script service, with the daemon bound to the VIP. For the iLOs and eth1, you
> could either plug them into a switch on their own non-trunked VLAN, or you
> can connect eth1 of host1 to the iLO of host 2, and eth1 of host2 to the iLO
> of host1. Neither the eth1 interfaces nor the iLOs need a gateway, since
> they're on the same subnet.
>

If I have understood correctly: if I use a dedicated switch, I must connect
IP2, IP3, IP5 and IP6 to that same switch, and IP1 and IP4 to the service
switch, isn't that right?
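
And in cluster.conf, if I follow you, the <clusternodes> section would then
use the management names, something like this (my untested sketch, with the
hostnames from your example):

  <!-- node names resolve to the eth1/management IPs (10.1.1.1 / 10.1.1.3) -->
  <clusternodes>
    <clusternode name="host1-management" nodeid="1">
      <fence>
        <method name="1">
          <device name="Fence_Host_1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="host2-management" nodeid="2">
      <fence>
        <method name="1">
          <device name="Fence_Host_2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>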

>
> To configure the iLO, you just set up the correct IP address, mask and
> create a username and password that has the appropriate privileges (power).
> These get put into the cluster.conf file via system-config-cluster or Luci.
> You would need to create two fence resources. In the above case, I would
> create Fence_Host_1 and Fence_Host_2 fence devices, using fence_ilo.
>

This is OK; it is what I have done, but with fence_ipmilan.
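
For the iLO agent, if I understand you, the <fencedevices> section would look
roughly like this (your example addresses, made-up credentials):

  <!-- one fence device per iLO; Hostname here is the iLO's own IP -->
  <fencedevices>
    <fencedevice agent="fence_ilo" name="Fence_Host_1"
                 hostname="10.1.1.2" login="foo" passwd="bar"/>
    <fencedevice agent="fence_ilo" name="Fence_Host_2"
                 hostname="10.1.1.4" login="foobar" passwd="barfoo"/>
  </fencedevices>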


>
> Fence_Host_1 would have the IP address of host1's iLO, a valid login and
> password for that iLO. Host2 is similar, but has the IP address of host2's
> iLO. Attach Fence_Host_1 to host1 and Fence_Host_2 to host2. This way, the
> entire cluster knows "to fence host1, I see that I need to use the
> Fence_Host_1 method. Fence_Host_1 uses fence_ilo as its agent, target IP
> address 10.1.1.2, username foo, password bar. To fence host2, it uses
> fence_ilo as its agent, target address 10.1.1.4, username foobar, password
> barfoo". These get passed to the fence_ilo script and it handles the rest.
> You can play with this by manually running fence_ilo.


I think I have understood

My problem was that I thought the iLO interfaces could only be reached from
other iLO interfaces.
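
Before putting it in the cluster I'll test the agent by hand from each node,
as you suggest; something like this (status action only, so nothing gets
powered off; addresses and credentials from your example):

  # from host1, check that host2's iLO answers
  fence_ilo -a 10.1.1.4 -l foobar -p barfoo -o status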

I'll try this configuration and post my results.

Thanks for your answer,

ESG


>
>
> On Tue, May 5, 2009 at 11:37 AM, ESGLinux <esggrupos at gmail.com> wrote:
>
>> Hello all,
>>
>> I'm configuring a 2-node cluster on two HP ProLiant DL165 G5 servers.
>> These servers have HP ProLiant Lights-Out 100 Remote Management, and I want
>> to use it as the fencing device.
>>
>> My first idea was to configure them with IPMI, and it works almost fine, but
>> I have found that when the network is down the fencing doesn't work,
>> because each node can't reach the other node to fence it.
>>
>> I have tried with a dedicated switch and with a direct cable, but it doesn't
>> work, and I'm beginning to think I'm doing something wrong, because the
>> interface with IPMI configured doesn't appear on the servers. I'll try to
>> explain:
>>
>> I have
>> node1
>> eth0: IP1
>> eth1: IP2
>> In the BIOS I have configured ethMng: IP3
>>
>> node2
>> eth0: IP4
>> eth1: IP5
>> In the BIOS I have configured ethMng: IP6
>>
>>
>> With the network up everything works fine: I can use fence_node to fence the
>> nodes, and the cluster works fine.
>>
>> But if I disconnect IP1, IP2, IP4 and IP5 (this simulates a switch failure),
>> I expect the cluster to fence via IP3 and IP6, but the system can't reach
>> those IPs and the whole cluster hangs.
>>
>> Looking at the fence devices available in Conga, I have seen that there is
>> one called "HP iLO", with these parameters to configure:
>>
>> Name
>> Hostname
>> Login
>> Password
>> Password Script (optional)
>>
>> All are self-explanatory, but I don't know what to put in Hostname (which
>> hostname? this machine's? the other's? FQDN... IP...?)
>>
>> So, I have two questions:
>> if I use IPMI, what am I doing wrong?
>> and
>> if I use HP iLO, what do I need to configure?
>>
>> Any idea, manual, doc, or suggestion is welcome.
>>
>> thanks in advance
>>
>> ESG
>>