[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Issue with mysql service in RHEL6.2 cluster



Ok, more dumb things...

In the past I have had problems bringing up VIPs that include the subnet mask bits in the address.

Try changing this line:
<ip address="10.26.240.95/24" monitor_link="on" sleeptime="2"/>

to this:
<ip address="10.26.240.95" monitor_link="on" sleeptime="2"/>

Remove it from the ip ref= tag as well.

Then try starting the service. It may also be easier to enable debug logging to help figure out what is going on with the service, but I am betting the change to the IP will probably work.
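For reference, the relevant pieces of the cluster.conf posted later in this thread would look like this after the change (a sketch; the /24 suffix is dropped in both places, and rgmanager's ip.sh agent should work out the netmask from the interface on its own):

```xml
<resources>
        <!-- VIP without the /24 suffix -->
        <ip address="10.26.240.95" monitor_link="on" sleeptime="2"/>
</resources>
<service autostart="1" domain="atp_failover" exclusive="0" name="mysql" recovery="relocate">
        <!-- the ref must match the resource's address attribute exactly -->
        <ip ref="10.26.240.95"/>
        <netfs ref="storage"/>
        <mysql ref="mysql"/>
</service>
```

Turning on debug logging is a one-attribute change in the same file: <logging debug="on"/> instead of <logging debug="off"/>.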

HTH,
Bill
On Wed, Jan 11, 2012 at 8:39 PM, Roka, Rajendra <rajendra roka pacificmags com au> wrote:

Yes, it starts if I do it manually:

 

[root@atp-wwdev1 ~]# mount -t nfs 10.26.240.190:/nfs/mysql /var/lib/mysql/

[root@atp-wwdev1 ~]# /etc/init.d/mysqld start

Starting mysqld:                                           [  OK  ]

 

[root@atp-wwdev1 ~]# cat /var/log/mysqld.log

120112 15:28:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql

120112 15:28:58  InnoDB: Started; log sequence number 0 44233

120112 15:28:58 [Note] Event Scheduler: Loaded 0 events

120112 15:28:58 [Note] /usr/libexec/mysqld: ready for connections.

Version: '5.1.52'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

 

[root@atp-wwdev1 ~]# /etc/init.d/mysqld stop

Stopping mysqld:                                           [  OK  ]

[root@atp-wwdev1 ~]# cat /var/log/mysqld.log

120112 15:29:39 [Note] /usr/libexec/mysqld: Normal shutdown

120112 15:29:39 [Note] Event Scheduler: Purging the queue. 0 events

120112 15:29:39  InnoDB: Starting shutdown...

120112 15:29:43  InnoDB: Shutdown completed; log sequence number 0 44233

120112 15:29:43 [Note] /usr/libexec/mysqld: Shutdown complete

 

But if I start it with the cluster, there is no error message at all in /var/log/mysqld.log.

 

Once again, my cluster.conf is as follows:

<?xml version="1.0"?>

<cluster config_version="39" name="atp_mysql">

        <clusternodes>

                <clusternode name="atp-wwdev1.test1.com.au" nodeid="1">

                        <fence/>

                        <multicast addr="239.192.200.1"/>

                </clusternode>

                <clusternode name="atp-wwdev2.test1.com.au" nodeid="2" votes="1">

                        <fence/>

                        <multicast addr="239.192.200.1"/>

                </clusternode>

        </clusternodes>

        <fencedevices>

                <fencedevice agent="fence_xvm" name="fence"/>

        </fencedevices>

        <rm>

                <failoverdomains>

                        <failoverdomain name="atp_failover" nofailback="0" ordered="1" restricted="0">

                                <failoverdomainnode name="atp-wwdev1.test1.com.au" priority="2"/>

                                <failoverdomainnode name="atp-wwdev2.test1.com.au" priority="5"/>

                        </failoverdomain>

                </failoverdomains>

                <resources>

                        <ip address="10.26.240.95/24" monitor_link="on" sleeptime="2"/>

                        <mysql config_file="/etc/my.cnf" listen_address="10.26.24.95" name="mysql" shutdown_wait="60" startup_wait="60"/>

                        <netfs export="/nfs/mysql" force_unmount="on" fstype="nfs" host="10.26.240.190" mountpoint="/var/lib/mysql" name="storage" no_unmount="on"/>

                </resources>

                <service autostart="1" domain="atp_failover" exclusive="0" name="mysql" recovery="relocate">

                        <ip ref="10.26.240.95/24"/>

                        <netfs ref="storage"/>

                        <mysql ref="mysql"/>

                </service>

        </rm>

        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>

        <cman expected_votes="1" two_node="1">

                <multicast addr="239.192.200.1"/>

        </cman>

        <totem/>

        <logging debug="off"/>

</cluster>

 

And my.cnf is as follows:

[mysqld]

datadir=/var/lib/mysql

socket=/var/lib/mysql/mysql.sock

user=mysql

# Disabling symbolic-links is recommended to prevent assorted security risks

symbolic-links=0

 

[mysqld_safe]

log-error=/var/log/mysqld.log

pid-file=/var/run/cluster/mysql/mysql.pid

 

If you need any more info, please let me know.

 

Thanks

 

 

 

 

From: linux-cluster-bounces redhat com [mailto:linux-cluster-bounces redhat com] On Behalf Of Ryan Mitchell
Sent: Thursday, 12 January 2012 3:01 PM


To: linux-cluster redhat com
Subject: Re: [Linux-cluster] Issue with mysql service in RHEL6.2 cluster

 

On 01/12/2012 01:11 PM, Roka, Rajendra wrote:

Any more suggestions on this?

According to the new log, it still timed out after 60 seconds, so either that wasn't long enough, or there is a misconfiguration and the database can't start because of it:

Jan 10 11:42:57 atp-wwdev1 modcluster: Starting service: mysql on node

Jan 10 11:42:57 atp-wwdev1 rgmanager[1690]: Starting stopped service service:mysql

Jan 10 11:42:58 atp-wwdev1 rgmanager[5252]: Adding IPv4 address 10.26.240.95/24 to eth0

Jan 10 11:43:01 atp-wwdev1 rgmanager[5401]: Starting Service mysql:mysql

Jan 10 11:44:01 atp-wwdev1 rgmanager[5657]: Starting Service mysql:mysql > Failed - Timeout Error

Jan 10 11:44:01 atp-wwdev1 rgmanager[1690]: start on mysql "mysql" returned 1 (generic error)

Jan 10 11:44:02 atp-wwdev1 rgmanager[1690]: #68: Failed to start service:mysql; return value: 1


What does it say in your mysql log?  The resource script runs the command to start the database and then waits for it to report success.  It waited 60 seconds without receiving any indication of whether the database had started, so it gave up.
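Conceptually, the agent's start path is a poll-until-ready loop like this (a simplified sketch, NOT the actual mysql.sh code; the readiness probe and the timings here are stand-ins):

```shell
#!/bin/sh
# Simplified sketch of a resource agent's "start" path: launch the
# service, then poll a readiness probe until it succeeds or
# startup_wait expires.

startup_wait=5                      # seconds; the thread uses 60
ready_file="/tmp/mysqld_ready.$$"   # stand-in for "mysqld accepting connections"

is_ready() {                        # hypothetical readiness probe;
    [ -f "$ready_file" ]            # real life would be e.g. mysqladmin ping
}

# stand-in for launching mysqld: it becomes "ready" after about a second
( sleep 1; touch "$ready_file" ) &

result=timeout
elapsed=0
while [ "$elapsed" -lt "$startup_wait" ]; do
    if is_ready; then
        result=started              # rgmanager would log success here
        break
    fi
    sleep 1
    elapsed=$((elapsed + 1))
done
echo "$result"                      # "timeout" maps to "Failed - Timeout Error"
rm -f "$ready_file"
```

If the loop exhausts startup_wait, the agent returns non-zero and rgmanager logs the generic error you see in your messages file.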

Look in the logs to see if there is any indication as to why the database won't start.  It could be because you have the wrong configuration in /etc/my.cnf, no permissions on some critical directories, or the resource script is misconfigured.  Also, you should investigate whether you can manually start the database (after mounting the NFS mount and adding the VIP of course) outside of cluster (and compare working and failing mysql logs).
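Putting that together, a manual check outside the cluster could look like this (commands adapted from earlier in the thread; eth0 is an assumption, use whichever interface carries the 10.26.240.0/24 network):

```
[root@atp-wwdev1 ~]# ip addr add 10.26.240.95/24 dev eth0    # add the VIP as rgmanager would
[root@atp-wwdev1 ~]# mount -t nfs 10.26.240.190:/nfs/mysql /var/lib/mysql
[root@atp-wwdev1 ~]# /etc/init.d/mysqld start
[root@atp-wwdev1 ~]# tail /var/log/mysqld.log                # compare against the failing run
[root@atp-wwdev1 ~]# /etc/init.d/mysqld stop
[root@atp-wwdev1 ~]# umount /var/lib/mysql
[root@atp-wwdev1 ~]# ip addr del 10.26.240.95/24 dev eth0    # clean up before retrying via rgmanager
```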

Regards,

Ryan Mitchell
Software Maintenance Engineer
Support Engineering Group
Red Hat, Inc.





--
Linux-cluster mailing list
Linux-cluster redhat com
https://www.redhat.com/mailman/listinfo/linux-cluster



--
Thanks,
Bill G.
tc3driver gmail com
