Issue #22 August 2006

Tips & tricks

Red Hat's customer service and support teams receive technical support questions from users all over the world. Red Hat technicians add the questions and answers to Red Hat Knowledgebase on a daily basis. Access to Red Hat Knowledgebase is free. Every month, Red Hat Magazine offers a preview into the Red Hat Knowledgebase by highlighting some of the most recent entries.

Tips from RHCEs

Growing the devices in a RAID array

As hard disk capacities keep increasing, replacement drives are often significantly larger than the devices they replace. This tip shows how to grow a RAID array by replacing its smaller partitions with larger ones.

We will assume that you have a RAID 5 array using three partitions (/dev/sdb1, /dev/sdc1, and /dev/sdd1) on /dev/md0. These partitions are 1 GiB each, giving you about 2 GiB of usable space. You add new disks and create three partitions (/dev/sde1, /dev/sdf1, and /dev/sdg1) of 5 GiB in size. By the end, you should have about 10 GiB of usable space.
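
If you need a reminder of how to create the new partitions, here is a minimal sketch using fdisk and sfdisk (the device name /dev/sde is taken from the scenario above; repeat for /dev/sdf and /dev/sdg):

fdisk /dev/sde       # create a new primary partition (n), set its type (t) to fd, then write (w)
sfdisk -l /dev/sde   # verify the new partition is listed with Id fd (Linux raid autodetect)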

After you have created the partitions and set the partition type to 0xfd, you can add these devices to the array. They will become hot spares:

 
mdadm /dev/md0 -a /dev/sde1 /dev/sdf1 /dev/sdg1

Fail the original devices one at a time, ensuring that the array rebuilds after each failed device. DO NOT fail more than one of the original devices without verifying that the array has finished rebuilding. If you fail two devices in a RAID 5 array, you may destroy data!

First, fail and remove the first device, and verify that the array has finished rebuilding:

mdadm /dev/md0 -f /dev/sdb1 -r /dev/sdb1
watch cat /proc/mdstat

Once it has finished rebuilding, fail the second device:

mdadm /dev/md0 -f /dev/sdc1 -r /dev/sdc1
watch cat /proc/mdstat

Once it has finished rebuilding, fail the third device:

mdadm /dev/md0 -f /dev/sdd1 -r /dev/sdd1
watch cat /proc/mdstat
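
If you prefer to script the wait instead of watching /proc/mdstat by hand, the following rough sketch (not part of the original procedure) simply polls until no resync or recovery is in progress, then prints the array state:

while grep -Eq 'resync|recovery' /proc/mdstat; do sleep 30; done
mdadm -D /dev/md0 | grep -i state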

After it has finished rebuilding, you have replaced all of the original 1 GiB devices with the new 5 GiB devices. However, we are not finished yet. We have two problems: the RAID array is still only using 1 GiB of each 5 GiB device, and the filesystem is still only 2 GiB.

First, grow the RAID array. mdadm can grow the array to a specified size using the -G (grow) and -z (size) options. The -z option can take a currently undocumented argument of max, which resizes the array to the maximum available space:

mdadm -G /dev/md0 -z max

`cat /proc/mdstat` and `mdadm -D /dev/md0` should show that the array is now using a 5 GiB device size.

Second, we need to enlarge the filesystem to match. Assuming that you have an ext3 filesystem on /dev/md0, and that you have mounted it, you can increase the size of the filesystem by using ext2online:

ext2online /dev/md0

After that command completes, you should see about 10 GiB of usable space.
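
To confirm the results, you can check the array and filesystem sizes. This assumes the filesystem is mounted at /mnt/data, which is a hypothetical mount point; substitute your own:

mdadm -D /dev/md0 | grep -i size    # member and array sizes should reflect the 5 GiB devices
df -h /mnt/data                     # should report roughly 10 GiB of space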

Why does restarting network interfaces that use the Broadcom 5715 NIC fail on HP Blade systems?

Release Found: Red Hat Enterprise Linux 3 Update 7, Red Hat Enterprise Linux 4 Update 3

Symptom:
Certain HP hardware ships with an optional Broadcom 5715 NIC. The bnx2 module that ships with Red Hat Enterprise Linux 3 Update 7 and Red Hat Enterprise Linux 4 Update 3 contains a bug that makes restarting these NICs fail. The BCM5715 works during network installs and after the OS has initially loaded, but subsequent restarts of the NIC will fail.

The models currently affected include the BL460C G1 and BL480C G1 blades.

Solution:
Red Hat Enterprise Linux 3 Update 8 (available July 18, 2006) and Red Hat Enterprise Linux 4 Update 4 (available August 7, 2006) will ship with fixes for this problem.

As a temporary fix, users who encounter this problem before those releases are available should update the bnx2 module with the drivers provided by HP:

RHEL 3
http://h18023.www1.hp.com/support/files/server/us/download/24758.html

RHEL 4
http://h18023.www1.hp.com/support/files/server/us/download/24759.html
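
To check which bnx2 driver is in use before and after applying the update, something like the following should work (eth0 is an assumed interface name; substitute the interface bound to the BCM5715):

# modinfo bnx2 | grep -i version
# ethtool -i eth0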

Why do I get OOM kills when I copy a large amount of data over Network File System?

When the size of the data to be copied exceeds the size of physical memory, oom-killer starts randomly killing processes. This can be fixed by:

sysctl -w vm.lower_zone_protection=100

When lower_zone_protection is set to 100, it increases the free page threshold by 100, thereby starting page reclamation earlier and preventing NFS (Network File System) from getting far behind the kernel's memory demands.
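
The sysctl command above only changes the running kernel. To keep the setting across reboots, it can also be added to /etc/sysctl.conf in the usual way:

# echo "vm.lower_zone_protection = 100" >> /etc/sysctl.conf
# sysctl -p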

For further details, refer to the Bugzilla entries:

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=193542
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=163555

How do I detect a dual-core CPU on Red Hat Enterprise Linux 4?

Release Found:
Red Hat Enterprise Linux

Listed below are a few tips for identifying whether a CPU is dual-core:

  • The ht flag shows that the processor is capable of hyper-threading, not that hyper-threading is enabled.
  • If the CPU is not dual-core, the physical id, cpu cores, and core id fields will still be shown, with the ht flag turned off.

Important points to note:

  1. Physical id and core id are not necessarily consecutive, but they are unique. Any CPUs with the same core id are hyperthreads in the same core.
  2. Any CPUs with the same physical id are threads or cores in the same physical socket.

Examples of how to find the information on physical id and core id:

[root@harwich3i root]# grep phys /proc/cpuinfo
physical id     : 0
physical id     : 8
physical id     : 12
physical id     : 4
physical id     : 8
physical id     : 12
physical id     : 0
physical id     : 4
physical id     : 8
physical id     : 12
physical id     : 0
physical id     : 4
physical id     : 8
physical id     : 12
physical id     : 0
physical id     : 4
[root@harwich3i root]# grep cores /proc/cpuinfo
cpu cores       : 2
(16 times)
[root@harwich3i root]# grep sib /proc/cpuinfo
siblings        : 4
(16 times)
[root@harwich3i root]# grep "core id" /proc/cpuinfo
core id         : 0
core id         : 8
core id         : 14
core id         : 6
core id         : 10
core id         : 12
core id         : 2
core id         : 4
core id         : 8
core id         : 14
core id         : 0
core id         : 6
core id         : 10
core id         : 12
core id         : 2
core id         : 4
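
As a quick way to summarize output like the above, the following one-liners may help (a sketch, not part of the original article):

grep -c ^processor /proc/cpuinfo                    # logical CPUs seen by the kernel
grep "physical id" /proc/cpuinfo | sort -u | wc -l  # number of physical sockets
grep "cpu cores" /proc/cpuinfo | sort -u            # cores per socket, as reported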

If an SMP kernel is not in use, or neither hyper-threading nor dual-core capability is present, the following fields will not be shown:

#ifdef CONFIG_SMP
if (smp_num_siblings > 1 || c->x86_num_cores > 1) {
 int cpu = c - cpu_data;
 seq_printf(m, "physical id\t: %d\n", phys_proc_id[c - cpu_data]);
 seq_printf(m, "siblings\t: %d\n", cpus_weight(cpu_core_map[cpu]));
 seq_printf(m, "core id\t\t: %d\n", cpu_core_id[c - cpu_data]);
 seq_printf(m, "cpu cores\t: %d\n", c->x86_num_cores);
}
#endif
If these fields are missing, either the CPU is not dual-core, the feature is disabled, or the wrong kernel is being used.

How can I access the storage adapter on a multi-node IBM x460 (x3950) after installing Red Hat Enterprise Linux version 3 Update 8?

After installing Red Hat Enterprise Linux 3 Update 8 with a single-node configuration on a multi-node IBM x460 (x3950), and then merging into a two-node configuration, the following steps must be taken in order to use the storage adapter (Adaptec 9410 SAS or ServeRAID) located on the second node:

  1. After kudzu detects the storage adapter from the second node, invoke the following command as root:
    #/sbin/new-kernel-pkg --install --mkinitrd `uname -r`
    

    This will ensure that on the next boot, the device driver for the second node will be loaded automatically.

  2. Manually load the module by running one of the following commands as root:

    • If the second node has the Adaptec 9410:
      #/sbin/modprobe adp94xx
      
    • If the second node has the ServeRAID 8i:
      #/sbin/modprobe aacraid 
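
After running modprobe, you can confirm that the driver loaded by checking the module list (a quick check, not part of the original procedure):

# /sbin/lsmod | grep -e adp94xx -e aacraid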
      

How do I upgrade my Red Hat Network Proxy to the latest version?

The process of upgrading a Red Hat Network (RHN) Proxy Server is relatively simple. Note that the RHN Satellite Server must be upgraded to 4.x prior to an upgrade of the RHN Proxy Server to 4.x.

In the simplest terms, an upgrade proceeds as follows:

  1. Preparation: Fully back up the RHN Proxy
  2. Reprovision that server to either Red Hat Enterprise Linux AS 3 or 4, explicitly following the software requirements documentation found in the RHN Proxy Installation Guide. Red Hat Enterprise Linux AS 4 is recommended.
  3. If applicable, restore the SSL build directory from backup to this directory: /root/ssl-build
    Modernize that SSL build directory to 4.x standards using Article #5410
  4. Install RHN Proxy Server 4.x via its top-level parent RHN service, either RHN Satellite or RHN hosted.
  5. Restore the old RHN Proxy Server's custom package repository to the default location of an RHN Proxy Server 4.0: /var/spool/rhn-proxy (that's the default, but it may be in a different location).
  6. Update, restart services and test
          up2date -uf
          /sbin/service rhn-proxy stop
          /sbin/service rhn-proxy start
    
    
Warning:

While upgrading an RHN Proxy Server which is configured as a Monitoring Scout, be aware that any probes configured against this Scout will be lost during the upgrade process. This includes version 4.0.

Introductory Notes:

  1. RHN Proxy Server can operate on Red Hat Enterprise Linux 3 or 4 AS, but Red Hat Enterprise Linux 4 AS is recommended.
  2. RHN Proxy Server can be installed or operated only if connected to RHN hosted (xmlrpc.rhn.redhat.com) or an RHN Satellite Server. RHN Proxy Server cannot be connected to an earlier version of RHN Satellite Server; the RHN Satellite Server needs to be upgraded first.
  3. A "provisioning" entitlement is required for RHN Proxy Server installs and upgrades. A provisioning entitlement is automatically received after purchasing RHN Proxy Server. For upgrades, make sure that one is available.

Preparation

  1. Verify existing RHN Proxy is working:
    • On the RHN Proxy:
      # up2date -uf
    • If RHN Proxy version 3.6 (or greater) is present:
      # /sbin/service rhn-proxy stop
      # /sbin/service rhn-proxy start
    • Otherwise:
      # /sbin/service httpd stop
      # /sbin/service squid stop
      # /sbin/service squid start
      # /sbin/service httpd start
    • Additionally if version 1.1.1 is present:
      # /sbin/service rhn_auth_cache restart
    • On a client connected to that RHN Proxy:
      # up2date -l
  2. Make a hard copy of this file for future reference: /etc/rhn/rhn.conf
  3. Completely back up the RHN Proxy Server (a minimal backup sketch follows this list)
  4. Read the RHN Proxy Installation Guide for software and hardware requirements. The software requirements may differ depending on the installation method, either kickstart or from CD. For instructions on using the web interface to perform actions such as provisioning, checking entitlements, and others, see the RHN Reference Guide. The Client Configuration Guide contains information on how to connect systems to Red Hat Network (RHN), thus it may be a useful reference.
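
For step 3, a minimal sketch of archiving the most important Proxy data before reprovisioning (the paths are those mentioned elsewhere in this article; adjust them to your installation, use /var/up2date/packages instead of /var/spool/rhn-proxy on older Proxy versions, and copy the archive off the machine before reinstalling):

# tar czf /tmp/rhn-proxy-backup.tar.gz /etc/rhn /etc/sysconfig/rhn /var/spool/rhn-proxy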

Reprovision

  1. Deactivate the proxy through the RHN web interface. Click on System->Details->Proxy and finally 'Deactivate Proxy'
  2. Re-installing the operating system on the machine in question will be needed. This can be accomplished either via RHN's provisioning mechanism or by installing from a Red Hat Enterprise Linux CD. Regardless of the method chosen, closely follow the software requirements specified in the RHN Proxy Installation Guide.
  3. If the CD Installation method is chosen, follow the extra steps below:
    • Configure up2date to connect to RHN to mirror the machine's previous configuration.
    • Register with RHN. Use a re-activation key to ensure that the machine is associated with its previous profile (an example follows this list).
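
An example of registering with a re-activation key using rhnreg_ks (the key value below is a placeholder; generate the re-activation key from the system's profile in the RHN web interface):

# rhnreg_ks --activationkey=<REACTIVATION-KEY>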

Restoring SSL build directory and generating a tar archive

The goal of this section is to refactor the SSL build directory into a format compatible with RHN v3.7 or better products. A secondary goal is to create an SSL tar archive to be used during the installation of RHN Proxy 3.7 or better (RHN hosted connected installs only). In addition to this document, refer to the Client Configuration Guide for information on using and generating SSL certificates.

There are several scenarios:
  • Current RHN Proxy is connected to RHN Satellite Server either directly or indirectly through a chain of RHN Proxies
    • This entire section can be skipped, as the appropriate SSL keys and certificates will be generated and installed during the new installation of RHN Proxy Server; skip ahead to the Install RHN Proxy 4.0 section.
    • If continued, it is assumed the machine in question is connected directly to RHN's hosted service or indirectly through a chain of RHN Proxies.
    • If rhn-ssl-tool is not on the system, subscribe the system to the Red Hat Network Tools channel and install it by running up2date rhns-certs-tool.
  • Upgrading from RHN Proxy Server version 1.1.1
    • Refer to this article.
    • Where those instructions refer to fetching the build tree from /etc/sysconfig/rhn, the intent is for the user to restore that tree from backup to /root/ssl-build.
  • Upgrading from RHN Proxy Server version 3.2.0
    1. From the previously created backups, restore /etc/sysconfig/rhn/ssl into /root/ssl-build
    2. Change directory to /root
    3. Reference the Client Configuration Guide to determine the correct parameters to rhn-ssl-tool --gen-ca used in the next step.
    4. Run (note the temporary directory):
    5. rhn-ssl-tool --gen-ca --dir ssl-build-temp <EXTRA OPTIONS>
      • Include the parameters from the previous step at the end of the command. Enter any bogus password when prompted. The point of this step is to generate a new rhn-ca-openssl.cnf.
    6. Copy ssl-build-temp/rhn-ca-openssl.cnf to the current directory.
    7. Delete the ssl-build-temp directory.
    8. Use the rhn-ssl-tool to generate a server SSL RPM and tar archive for this RHN Proxy:
    9.  rhn-ssl-tool --gen-server
      ^C #i.e. hit control-C at the password prompt
      
    10. Move the "unknown" directory just generated to the machine name directory. The machine name of that RHN Proxy is determined by dropping the domain from the hostname; e.g., my.proxy.example.com's machine name is my.proxy.
    11. mv ssl-build/unknown ssl-build/MACHINE-NAME
      cd ssl-build/MACHINE-NAME
      
      cat server.crt server.key > server.pem
      cd ../..
      rhn-ssl-tool --gen-server --rpm-only
      
    12. If this command is not run on the RHN Proxy, mv the unknown directory to the appropriate location and include --set-hostname HOSTNAME
    13. Finally, follow the recommendations of the Client Configuration Guide for storage of the tree.
  • Upgrading from RHN Proxy Server version 3.2.2
  • Note: if this RHN Proxy Server was installed as version 3.2.0, follow those particular instructions.
    1. From the previously created backups, restore /etc/sysconfig/rhn/ssl into /root/ssl-build ("previously created backups" mean the archive SSL build tree, whether it comes from some internal archive repository or local machines)
    2. Change directory to /root
    3. Use the rhn-ssl-tool to generate a server SSL RPM and tar archive for this RHN Proxy (if this command is not run on the RHN Proxy, include --set-hostname HOSTNAME):
      cd ssl-build/MACHINE-NAME
      cat server.crt server.key > server.pem
      cd ../..
      rhn-ssl-tool --gen-server --rpm-only
      
    4. Finally, follow the recommendations of the Client Configuration Guide for storage of the ssl-build tree.
  • Upgrading from RHN Proxy Server version 3.6
  • Refer to Article #5410.

    Follow the recommendations of the Client Configuration Guide for storage of the ssl-build tree.

    If steps 3, 4, 5, or 6 are followed, there should now be a tar archive (new for version 3.7) in the /root/ssl-build/MACHINE-NAME/ directory in this format: rhn-org-httpd-ssl-archive-MACHINE-NAME-1.0-RELEASE.tar

Install RHN Proxy 4.0

Install RHN Proxy Server via its top-level parent RHN service, which is either RHN Satellite or RHN hosted. Refer to the Proxy Installation Guide for instructions. Use the hard copy of /etc/rhn/rhn.conf for values to enter into the installer (especially traceback_mail, HTTP proxy settings, etc.). When asked, upload the SSL tar archive created previously (RHN hosted only).

Restore custom package repository

Restore old RHN Proxy Server's custom package repository to the default location of an RHN Proxy Server 4.0:

restore /var/up2date/packages (or /var/spool/rhn-proxy if that applies) to /var/spool/rhn-proxy
chmod 0750 /var/spool/rhn-proxy
chown apache.apache /var/spool/rhn-proxy
mkdir -m 0750 -p /var/spool/rhn-proxy/list
chown apache.apache /var/spool/rhn-proxy/list
Update, restart services, and test

up2date -uf
/sbin/service rhn-proxy stop
/sbin/service rhn-proxy start

Is it possible to modify the heartbeat timers in cluster suite?

Tunable parameters for heartbeat checks are in /proc/cluster/config/cman as hello_timer, deadnode_timer, and max_retries. The timers are expressed in seconds, and changes take effect immediately. Any changes should be made carefully: only max_retries can safely be changed on the fly, whereas hello_timer and deadnode_timer should be changed after loading the cman module but before joining the cluster (cman_tool join), as they need to be the same across all nodes.
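
A minimal sketch of reading and adjusting these values through the proc interface (assuming the cman module is loaded; the value 5 is only an illustration, and per the caveat above only max_retries should be changed while the node is in the cluster):

cat /proc/cluster/config/cman/hello_timer
cat /proc/cluster/config/cman/deadnode_timer
echo 5 > /proc/cluster/config/cman/max_retries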

Note: If these values are set too low, the machine could stop working while under heavy processing load.

For more technical details:
http://www.redhat.com/archives/linux-cluster/2005-October/msg00070.html

The information provided in this article is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.

This article is protected by the Open Publication License, V1.0 or later. Copyright © 2006 by Red Hat, Inc.