Issue #14 December 2005

Tips & tricks

Red Hat's customer service team receives technical support questions from users all over the world. As they are received, Red Hat technicians add the questions and answers to the Red Hat Knowledgebase on a daily basis. Individuals with a redhat.com login are granted access. Every month, Red Hat Magazine offers a preview into the Red Hat Knowledgebase by highlighting some of the most recent entries.

Tips from RHCEs

Remote printing with CUPS—securely

Red Hat uses the Common UNIX Printing System, or CUPS, to handle local and remote printing. One of the benefits of CUPS is the ability to manage a printer remotely through a web browser. Unfortunately, this is done "in the clear" rather than over a secure connection such as SSL. Luckily, we can use SSH to solve this problem.

First, we ensure that CUPS is properly configured on the remote machine with the system-config-printer utility. Second, we create a user account on the remote system:

     useradd doug -G sys

The "-G" adds the user to the "sys" group, which allows printer administration. Remember to assign a password to the new account.

Now, access the system:

     ssh -Y doug@hostname.domain.tld "firefox http://localhost:631"

The "-Y" will allow Firefox to execute on the remote machine, but display on the local machine. This will allow secure remote administration of CUPS.

How do I utilize direct routing on Piranha 0.7.7 using iptables?

Note: This article contains the information needed to make direct routing work with Piranha; it does not explain how to configure Piranha services.

Setting up Piranha:

  1. Ensure that the following packages are installed on the LVS directors:
    • piranha
    • ipvsadm
  2. Ensure that the following packages are installed on the LVS real servers:
    • iptables
    • arptables_jf
  3. Set up and log in to the Piranha web-based GUI. Follow the instructions in the Red Hat Enterprise Linux 3 Manual:
    https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/3/html/Cluster_Administration/ch-lvs-piranha.html
  4. Configure Piranha for Direct Routing. In the "GLOBAL SETTINGS" tab of the Piranha configuration tool, enter the primary server's public IP address in the box provided. The private IP address is not needed/used for Direct Routing configurations. In a direct routing configuration, all real servers as well as the LVS directors share the same virtual IP addresses and should have the same IP route configuration. Click the "Direct Routing" button to enable Direct Routing support on the Piranha LVS director node(s).
  5. Configure the services and the real servers using the Piranha GUI.
  6. Set up each of the real servers.

Setting up the Real Servers Using iptables

We use an iptables rule to create a transparent proxy so that a node will service packets sent to the virtual IP address(es), even though the virtual IP address does not exist on the system.

Advantages:
  • Simple to configure
  • Avoids the LVS "ARP problem" entirely, because the virtual IP address(es) exist only on the active LVS director.

Disadvantages:

  • Performance. There is an overhead in forwarding/masquerading every packet.
  • Impossible to reuse ports. For instance, it is not possible to run two separate Apache services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses.

Instructions:

  1. Back up your iptables configuration.
  2. On each real server, run the following for every VIP / port / protocol (TCP, UDP) combination intended to be serviced for that real server (a filled-in example appears after these instructions):
    iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> \
    --dport <port> -j REDIRECT
    

    This will cause the real servers to process packets destined for the VIP that they are handed.

    service iptables save
    chkconfig --level 2345 iptables on


    The first command saves the iptables rules just created; the second causes the saved configuration to be reloaded on boot, before the network is started.
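
    For instance, a minimal filled-in sketch, assuming a hypothetical VIP of 192.168.0.100 serving HTTP on TCP port 80:

    iptables -t nat -A PREROUTING -p tcp -d 192.168.0.100 \
    --dport 80 -j REDIRECT

    The REDIRECT target rewrites the packet's destination to the local machine, so the real server accepts traffic for 192.168.0.100 even though that address is not configured on any of its interfaces.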

How can I run Certificate System as a non-root user but still use privileged ports like 443 and 80?

  1. Log in as root on the machine where Red Hat Certificate System is to be installed and execute the following:
    # rpm -ivh rhcs*.rpm
    
  2. Run the setup. Root privileges (running as the root user and group) may be needed at some stages to configure the Certificate Authority (CA) instance, for example cert-ca:
    # /opt/redhat-cs/setup/setup
    
  3. Choose privileged ports like 443, 80 etc.
  4. Restart CA:
    # /opt/redhat-cs/cert-ca/restart-cert
    
  5. Make sure the CA can run on the ports chosen above. To test it, use a browser and go to https://host:443/
  6. Create a local user account and the group it will belong to.
  7. Go to the Certificate System instance configuration directory, /opt/redhat-cs/cert-ca/config/, and edit the magnus.conf file. Then run the following commands:
    chown -R "specific_username:specific_group" /opt/redhat-cs/cert-ca/
    chown "specific_username:specific_group" /opt/redhat-cs/alias/cert-ca*
    chmod 664 /opt/redhat-cs/alias/secmod.db
    export LD_ASSUME_KERNEL=2.4.1
    

    For example, to allow the user redhat from group redhat to run Certificate System, we change the lines to:

    chown -R "redhat:redhat" /opt/redhat-cs/cert-ca/
    chown "redhat:redhat" /opt/redhat-cs/alias/cert-ca*
    chmod 664 /opt/redhat-cs/alias/secmod.db
    export LD_ASSUME_KERNEL=2.4.1
    
    
  8. Restart the Certificate System:
    # /opt/redhat-cs/cert-ca/restart-cert
    

Note: If the LD_ASSUME_KERNEL=2.4.1 parameter is not set, the IBM JRE crashes while trying to read /proc/self/maps. This is a known issue documented in this Bugzilla report: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=165351
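
A quick way to verify the result (a sketch using standard tools, assuming the user redhat and the instance cert-ca from the example above) is to confirm that the server process is no longer running as root yet still answers on the privileged port:

     ps -ef | grep cert-ca
     curl -k https://host:443/

The ps output should show the processes owned by redhat, and curl -k (which skips certificate verification) should return the CA's start page.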

How do I enable host-based authentication so users can login via SSH without a password?

The following is the process for setting up host-based authentication using SSH:

On the Server

  1. Create a file in the user's home directory named .shosts with at least one entry:
    clienthostname.clientdomain.com  username
    

    This file must be read/write for this user only:

    # chown username:username ~username/.shosts
    # chmod 600 ~username/.shosts
    
  2. Edit the /etc/ssh/sshd_config file and add the following options:
    HostbasedAuthentication yes
    IgnoreRhosts no
    
  3. Run the command:
    # ssh-keyscan -t dsa clienthostname >> /etc/ssh/ssh_known_hosts
    (for DSA keys)
    

    or

    # ssh-keyscan -t rsa clienthostname >> /etc/ssh/ssh_known_hosts
    (for RSA keys)
    
  4. Restart the SSH server:
    # service sshd restart
    

On the client:

  1. Edit the /etc/ssh/ssh_config file and under Host *, add the following options:
    HostbasedAuthentication yes
    EnableSSHKeysign yes
    
  2. This should allow you to log in without a password. You can also use ssh-keyscan to add more client machines to /etc/ssh/ssh_known_hosts, as shown below.
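
For example, a small loop on the server (a sketch; the hostnames are placeholders) gathers the RSA host keys for several clients at once:

     for host in client1 client2 client3; do
         ssh-keyscan -t rsa $host >> /etc/ssh/ssh_known_hosts
     done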

Note: Host-based authentication is only recommended on a secure network. Be aware that if a client host is compromised, the machines that trust it via host-based authentication are compromised as well.

How do I make device mapper multipath ignore my local disks when generating the multipath maps in Red Hat Enterprise Linux 4?

Release Found: Red Hat Enterprise Linux 4

Limitation:
For this article, /dev/sda will be used as an example of the local (internal) disk. Note that dm multipath is only supported by Red Hat Technical Support on Red Hat Enterprise Linux 4 and later releases.

Some machines have local SCSI cards for their internal disks. In these cases, device mapper multipath should be configured to ignore the internal disks when scanning for multipathed devices.

Determine which disks are the internal disks and mark them as the ones to blacklist. Prior to blacklisting these devices, notice that multipath -v2 shows the local disk in its multipath map:

[root@rh4cluster1 ~]# multipath -v2

create: SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1
[size=33 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 0:0:0:0 sda  8:0     

device-mapper ioctl cmd 9 failed: Invalid argument
device-mapper ioctl cmd 14 failed: No such device or address
create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:0 sdb  8:16    
  \_ 3:0:0:0 sdf  8:80    

create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:1 sdc  8:32    
  \_ 3:0:0:1 sdg  8:96    

create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:2 sdd  8:48    
  \_ 3:0:0:2 sdh  8:112   

create: 3600a0b80001327510000009b4362163e
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:3 sde  8:64    
  \_ 3:0:0:3 sdi  8:128   

As shown in the example above, device mapper has mapped /dev/sda in its multipath maps. To stop this from happening, edit the /etc/multipath.conf file. The following section needs to be changed:

devnode_blacklist {
      wwid 26353900f02796769
      devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
      devnode "^hd[a-z][0-9]*"
      devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

Add the internal disks to be blacklisted to the first devnode line shown above. In this example, sda is the internal drive. The section will then look like this:

devnode_blacklist {
      wwid 26353900f02796769
      devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|sda)[0-9]*"
      devnode "^hd[a-z][0-9]*"
      devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
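
Alternatively, the internal disk can be blacklisted by its WWID instead of its device name, which is more robust if device names change between boots. A sketch, assuming the RHEL 4 scsi_id syntax; the identifier returned should match the one reported for sda in the multipath -v2 output above:

scsi_id -g -u -s /block/sda

Add the returned identifier as an additional wwid line in the devnode_blacklist section, alongside the existing wwid entry.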

Flush the existing maps and rebuild them by running the commands:

multipath -F
multipath -v2

The local disks should no longer be listed in the new multipath maps:

[root@rh4cluster1 ~]# multipath -F
[root@rh4cluster1 ~]# multipath -v2
create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:0 sdb  8:16    
  \_ 3:0:0:0 sdf  8:80    

create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:1 sdc  8:32    
  \_ 3:0:0:1 sdg  8:96    

create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:2 sdd  8:48    
  \_ 3:0:0:2 sdh  8:112   

create: 3600a0b80001327510000009b4362163e
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0 
  \_ 2:0:0:3 sde  8:64    
  \_ 3:0:0:3 sdi  8:128  

How do I allow a GFS mounting node back into my cluster after it has been fenced using a McData Fibre Channel switch?

Release Found: Red Hat Enterprise Linux 3, GFS 6.0

Limitation: This article only applies for Red Hat Enterprise Linux 3, GFS 6.0 and the McData Fibre Channel switch. This is not guaranteed to work with other switches or versions other than that described.

In order to effectively fence a node, the McData Fibre Channel switch disables the port to which the node is connected. Once the fenced node has been fixed and is ready to rejoin the GFS cluster, its port on the McData Fibre Channel switch must be re-enabled.

Using the command below, it is possible to re-enable the node's port:

fence_mcdata -a IPaddress -l login -p password -n port -o enable

Parameters for the fence_mcdata command are listed below:

-a IPaddress
   IP address of the switch.

-h
   Print out a help message describing available options, then exit.

-l login
   Username for the switch.

-n port
   The port number on the switch to act on.

-p password
   Password for the switch.

-o action
   The action to perform: 'enable' or 'disable'.
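
For example, to re-enable port 4 on a switch at 192.168.0.5 (all values here are hypothetical):

fence_mcdata -a 192.168.0.5 -l admin -p password -n 4 -o enable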

How do I set up a persistent symlink for a swap device and refer to it in /etc/fstab instead of the device partition?

Release Found: Red Hat Enterprise Linux 3

A second call to swapon is made after devlabel is started, which makes swap devices immune to changing device names. To take advantage of this, follow the instructions below:
  1. Make a devlabel link from swap to the desired device:
    devlabel add -d /dev/sdX -s /dev/swap 
    
  2. Modify /etc/fstab to point to the /dev/swap device (see the example entry after these steps).
  3. Reboot.
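
The swap entry in /etc/fstab would then look something like this (a sketch; it replaces a line that named the device directly, such as /dev/sdX):

    /dev/swap    swap    swap    defaults    0 0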

Note: Extraneous error messages may appear; they should not prevent the process from completing.

The ifconfig program does not show any of my clustered service IP addresses. How do I check whether the TCP/IP address is up for my clustered service?

The address for a clustered service IP is managed using the ip command, provided by the iproute package. The command creates an IP address alias for the cluster service on a network interface.

IP aliases cannot be viewed using the ifconfig command. The ip command can show all the addresses an interface has, so run the command on the cluster node that is delivering the service.

In this example, the host node2 is providing two services; each service has an IP address, and each of those addresses is an alias on eth0:

[root@node2 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:ab:ed:9b brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.112/32 scope global eth0
    inet 192.168.1.111/32 scope global eth0

    inet6 fe80::20c:29ff:feab:ed9b/64 scope link
       valid_lft forever preferred_lft forever
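
Under the hood, the cluster software manages these aliases with ip commands equivalent to the following (a sketch using the 192.168.1.111 service address from the output above; do not run these by hand on a live cluster, since the cluster manager owns these addresses):

ip addr add 192.168.1.111/32 dev eth0
ip addr del 192.168.1.111/32 dev eth0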

The information provided in this article is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.

This article is protected by the Open Publication License, V1.0 or later. Copyright © 2004 by Red Hat, Inc.