November 16, 2006


Tips & tricks

Red Hat's customer service and support teams receive technical support questions from users all over the world. Red Hat technicians add the questions and answers to Red Hat Knowledgebase on a daily basis. Access to Red Hat Knowledgebase is free. Every month, Red Hat Magazine offers a preview into the Red Hat Knowledgebase by highlighting some of the most recent entries.

Tips from RHCEs

USB when the drivers aren't available

As a way to save a few valuable pennies on newer PCs, manufacturers are increasingly getting rid of the good old PS/2 keyboard and mouse interfaces. As a result, some recent systems only ship with USB ports to which we need to connect a USB keyboard and mouse.

USB is all good, but what if the driver for your USB controller is not loaded? In practice, this is not a problem, as Red Hat loads the ehci-hcd and uhci-hcd drivers automatically at boot time.

There are situations, namely in emergency mode, where the USB drivers won't be available, so you won't even be able to enter a command. This is because in emergency mode all drivers must come from the initrd file under /boot, and the USB drivers are not included by default. The trick is to add those drivers so that they are available earlier. The 'mkinitrd' command can do precisely that with its '--with' argument (this only works under RHEL 4):

mkinitrd --with=ehci-hcd --with=uhci-hcd /boot/newinitrd-`uname -r`.img `uname -r`

Add a new entry in your grub.conf file (always do backups!) that points to this new initrd image, and you're done! Your USB keyboard now works in emergency mode.
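
For reference, the new grub.conf stanza might look something like the sketch below. The kernel version and root device are placeholders for a typical RHEL 4 install; copy your existing entry and change only the title and initrd lines:

title Red Hat Enterprise Linux (USB-capable initrd)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-42.EL ro root=/dev/VolGroup00/LogVol00
        initrd /newinitrd-2.6.9-42.EL.img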

How do I copy a file that is larger than 4GB across to a USB external drive?

By default, the USB device is mounted as a vfat filesystem rather than ext3. The vfat filesystem has a maximum file size limit of 4GB (minus 1 byte). Most USB drives are formatted as vfat because it works across multiple operating systems. However, to copy a file larger than 4GB, the filesystem has to be changed to ext3. The following steps will change the filesystem to ext3:

  1. Backup all the data from the USB device.
  2. Format the drive to be ext3 by running the following command:
    mkfs.ext3 /dev/[device_name]
    

    where [device_name] can be sdX

  3. Remount it and then copy the file to the USB drive.

Note: The Windows operating system will not read ext3 filesystems without third party applications.
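
Putting the steps together, the whole procedure might look like the sketch below. It assumes the drive shows up as /dev/sdb1 and is mounted at /mnt/usb; both names are examples, so double-check with dmesg or fdisk -l first, because mkfs.ext3 destroys all data on the device:

# /dev/sdb1 and /mnt/usb are example names; verify yours before formatting
umount /dev/sdb1
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /mnt/usb
cp /path/to/large-file.iso /mnt/usb/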

Why do I see the error 'rndc: connection to remote host closed' when I try to start named?

One possible cause of this error is an incorrectly referenced rndc key. The rndc symmetric key is stored in /etc/rndc.key. This file is included by both the BIND nameserver configuration (/etc/named.conf) and the rndc utility configuration (/etc/rndc.conf). BIND and rndc use the rndc key to authenticate their communications.

By default, named.conf references the rndckey within the rndc.key file, as can be seen in this named.conf statement:
controls {
        inet 127.0.0.1 allow { localhost; } keys { rndckey; };
}

If named cannot find the rndckey in /etc/rndc.key, it will report the error 'rndc: connection to remote host closed'.

When rndc-confgen -a is used to create a new rndc key, the new key is called rndc-key by default. Therefore, the /etc/rndc.key file needs to be edited and the key name changed to rndckey so that named can find it. Alternatively, the command:

rndc-confgen -a -k rndckey

will give the key the correct name as referenced in named.conf.

Here are some points to keep in mind when setting up named and rndc:

  • Ensure that the name of the rndc key referenced in named.conf is the same as the name of the key in /etc/rndc.key. This should be rndckey.
  • If using a chroot environment, make sure /etc/rndc.key is a soft link to /var/named/chroot/etc/rndc.key.
  • Check the permissions and ownership of the rndc.key file. Permissions should be 640 with owner:group of root:named.
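
For illustration only, a consistent pair of files might look like the following; the secret shown is a placeholder, not a real key:

/etc/rndc.key:

key "rndckey" {
        algorithm hmac-md5;
        secret "PLACEHOLDER-BASE64-SECRET==";
};

/etc/named.conf (relevant lines):

include "/etc/rndc.key";

controls {
        inet 127.0.0.1 allow { localhost; } keys { rndckey; };
};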

Why is my LD_LIBRARY_PATH variable not being set in gnome-terminal despite the value being set in bash_profile or profile?

Enable the "Run command as login shell" option for gnome-terminal and LD_LIBRARY_PATH will be set in gnome-terminal. To enable the option, follow the procedure below that matches your requirement:

  • To set the login_shell to true on a per user basis, use the command below:
    $ gconftool-2 --set --type boolean /apps/gnome-terminal/profiles/Default/login_shell true
    
  • To modify the settings as a default option on a system-wide basis, use the command below (all on one line):
    $ gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults 
    	--set --type boolean /apps/gnome-terminal/profiles/Default/login_shell true
    
  • To modify the settings as a mandatory option on a system-wide basis, use the command below (all on one line):
    
    $ gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory 
    	--set --type boolean /apps/gnome-terminal/profiles/Default/login_shell true
    

Note: gconftool-2 is a utility supplied in the GConf2 package.
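
As a quick sanity check, export the variable from ~/.bash_profile and open a new gnome-terminal with the login shell option enabled; the library path below is only an example:

# in ~/.bash_profile (example path)
export LD_LIBRARY_PATH=/opt/myapp/lib:$LD_LIBRARY_PATH

# in a newly opened gnome-terminal
echo $LD_LIBRARY_PATH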

How do I mount a GFS filesystem without starting the cluster in Red Hat Enterprise Linux 4?

A GFS filesystem can be mounted on one machine without the need to start the cluster services. The trick is to use the "lock_nolock" locking protocol.

There are a number of ways to do this:

  1. Via gfs_tool
    1. Make sure that the GFS module is loaded:
      modprobe gfs
      
    2. To prepare GFS for the actual mount command, execute:
      gfs_tool margs lockproto=lock_nolock
      
    3. Mount the GFS filesystem, like so:
      mount -t gfs /dev/VolGroup00/LogVol01 /mount/point
      

      The gfs_tool command has to be performed for each GFS filesystem to be mounted. For example:

      gfs_tool margs lockproto=lock_nolock
      mount -t gfs /dev/VolGroup00/LogVol02 /mount/data
      gfs_tool margs lockproto=lock_nolock
      mount -t gfs /dev/VolGroup00/LogVol03 /mount/shared
      
  2. Directly mounting

    Another way to achieve the same result is to pass the lockproto option during mounting. The example above can be accomplished with the following commands:

    mount -t gfs /dev/VolGroup00/LogVol02 /mount/data -o lockproto=lock_nolock
    mount -t gfs /dev/VolGroup00/LogVol03 /mount/shared -o lockproto=lock_nolock
    

    Again, make sure that the gfs module is loaded before mounting.

Note: This is useful when the cluster is down and data needs to be accessed from GFS, or for back-up purposes. Another machine with an attached tape device can also mount the GFS filesystem and back it up.

Warning: Although the GFS filesystem will be accessible to other machines, do not mount it with the lock_nolock option on multiple machines at the same time, or the data will be corrupted.
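
For example, a one-off backup from a machine outside the cluster might look like the sketch below; the device, mount point, and archive path are placeholders, and the filesystem should be unmounted as soon as the backup finishes:

modprobe gfs
mount -t gfs /dev/VolGroup00/LogVol02 /mnt/gfsbackup -o lockproto=lock_nolock
tar czf /backup/gfs-data.tar.gz -C /mnt/gfsbackup .
umount /mnt/gfsbackup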

How many connections can I have open at one time using the Sockets Direct Protocol (SDP)?

The Sockets Direct Protocol (SDP) is a protocol that maps standard socket operations onto native InfiniBand Remote Direct Memory Access (RDMA) operations. This allows socket applications to run unchanged and still receive most of the performance benefits of InfiniBand.

As of Open Fabrics Enterprise Distribution (OFED) version 1.0 (which will be available on Red Hat Enterprise Linux 4.5 and Red Hat Enterprise Linux 5), the SDP kernel code can only support approximately 800 simultaneously open connections. If more connections are needed, SDP should not be used; either IP over InfiniBand (IPoIB) or direct RDMA via one of the various other methods should be chosen as an alternative. This limitation has been removed in the upstream OFED 1.1 release, and it will be removed from Red Hat products when the OFED 1.1 code is released as part of a regular update to Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5.

How do I configure access controls with Red Hat Enterprise Linux 4, OpenLDAP, NIS netgroups, and OpenSSH?

Release Found: Red Hat Enterprise Linux 4

Assumption:

This article assumes that there is a correctly configured LDAP server.

On the LDAP server:

  1. Define three users: two will be in a netgroup called "myadmins", which will have access to the specified system, and the third will not have access, for testing purposes.
  2. Create an Organizational Unit "netgroup". For example:
    dn: ou=netgroup,dc=example,dc=com
    objectClass: organizationalUnit
    ou: netgroup
    
  3. Define the netgroup entry containing the nisNetgroupTriple attribute, which has the format (host,user,NIS-domain). This example focuses on the user field only:
    dn: cn=myadmins,ou=netgroup,dc=example,dc=com
    objectClass: nisNetgroup
    objectClass: top
    cn: myadmins
    nisNetgroupTriple: (-,user1,-)
    nisNetgroupTriple: (-,user2,-)
    
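The two entries above can be saved to a file (for example, netgroup.ldif) and loaded with ldapadd; the bind DN is a placeholder for whatever administrative DN your directory actually uses:

ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f netgroup.ldif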

On the Client:

  1. Modify /etc/ssh/sshd_config to use PAM for authentication:
    UsePAM yes
    
  2. Modify /etc/ldap.conf to provide the correct search suffix for the netgroup:
    nss_base_netgroup ou=netgroup,dc=example,dc=com?one
    
  3. Modify /etc/nsswitch.conf to query the LDAP server for netgroup information:
    netgroup: ldap
    
  4. Modify /etc/security/access.conf to allow only local logins and "myadmins" netgroup access:
    +:ALL:LOCAL
    +:@myadmins:ALL
    -:ALL:ALL
    
  5. Modify /etc/pam.d/system-auth to require the pam_access.so module. The account stack should reflect something like this:
    account required /lib/security/$ISA/pam_unix.so
    account [default=bad success=ok user_unknown=ignore service_err=ignore system_err=ignore] /lib/security/$ISA/pam_ldap.so
    account required /lib/security/$ISA/pam_access.so
    
  6. Test the setup by logging in via an ssh client as "user1", "user2", and the user who is not in the "myadmins" netgroup.
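
To verify the whole chain, confirm that the client actually resolves the netgroup from LDAP and that pam_access behaves as expected. A quick check might look like this, where client-host is the client machine and user3 stands in for the account that is not in the netgroup:

getent netgroup myadmins
ssh user1@client-host      # should be allowed
ssh user3@client-host      # should be denied by pam_access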

The information provided in this article is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.

This article is protected by the Open Publication License, V1.0 or later. Copyright © 2006 by Red Hat, Inc.