Issue #18 April 2006
- Inside Fedora Core 5
- Introduction to Eclipse on Fedora
- Confessions of an Eclipse convert
- FUDCon Friday
- Podcast: Fedora Reloaded: Episode 5
- Podcast: The future of the Fedora community
- Red Hat to acquire JBoss
- Podcast: Certified engineer and jazz musician lives to improvise
- Video: He came, he saw, he got a job
- Red Hat plans Summit IP panel
- UNC Symposium on Intellectual Property, Creativity, and the Innovation Process
- Opening Red Hat Knowledgebase
- Video: Volunteers join Sri Lanka tsunami relief effort
- Virtualization: What's happening lately?
- Video: CD-adapco lowers costs, increases performance with GFS
From the Inside
In each Issue
- Editor's blog
- Red Hat speaks
- Ask Shadowman
- Tips & tricks
- Fedora status report
- Podcast (XML)
- Magazine archive
Tips & tricks
Red Hat's customer service team receives technical support questions from users all over the world. As questions come in, Red Hat technicians add them, with their answers, to the Red Hat Knowledgebase daily. Individuals with a redhat.com login are granted access. Every month, Red Hat Magazine offers a preview of the Red Hat Knowledgebase by highlighting some of the most recent entries.
Tips from RHCEs
by Richard Keech, Red Hat Certified Engineer®
Many system administrators may be in the habit of rebooting their systems to make partition changes visible to the kernel. With Red Hat® Enterprise Linux® this is not usually necessary. The partprobe command, from the parted package, informs the kernel about changes to partitions. After all, anything that can help you avoid a reboot has to be a good thing!
# cat /proc/partitions
major minor  #blocks  name
   3     0  58605120  hda
   3     1    200781  hda1
   3     2   2040255  hda2
   3     3  56364052  hda3
   8     0   1018880  sda
   8     1     10224  sda1
# partprobe
# cat /proc/partitions
major minor  #blocks  name
   3     0  58605120  hda
   3     1    200781  hda1
   3     2   2040255  hda2
   3     3  56364052  hda3
   8     0   1018880  sda
   8     1     10224  sda1
   8     2   1008640  sda2
How do I configure Red Hat Enterprise Linux 3 or 4 to access iSCSI storage?
by Gary Case
After proper configuration, iSCSI-based storage will appear as a standard SCSI disk on a Red Hat Enterprise Linux 3 or 4 system.
Initiator configuration, Part 1
In iSCSI parlance, the device where data is stored is called the target. This is usually a SAN or NAS device like an EMC Clariion, Hitachi TagmaStore, IBM System Storage or NetApp Filer. The program or device on the server that handles communication with the iSCSI target is called the initiator. Red Hat ships a software-based initiator with RHEL.
- Install the iscsi-initiator-utils package
After registering the system with Red Hat Network (RHN), run this command to install the iSCSI initiator package:
# up2date iscsi-initiator-utils
- Create an /etc/initiatorname.iscsi file
Each iSCSI device on the network, be it initiator or target, has a unique iSCSI node name. Red Hat uses the iSCSI Qualified Name (IQN) format with the initiator that ships with Red Hat Enterprise Linux. In the IQN format, a node name consists of a predefined section, chosen based on the initiator manufacturer, and a unique device name section which is editable by the administrator.
iSCSI Node Name Guidelines
- The entire node name can be up to 223 bytes in length
- No white space is allowed
- Node names are not case sensitive
- The following ASCII characters can be used:
- dash ('-')
- dot ('.')
- colon (':')
- numbers 0-9
- lower-case letters a-z
The node name of the initiator is stored in the /etc/initiatorname.iscsi file. Red Hat recommends using the /sbin/iscsi-iname command to generate a random node name, which can then be customized by the administrator. Using the system name or function (e.g., mail-server-1 or oracle-3) as the unique portion of the name can simplify iSCSI administration. An example initiatorname.iscsi file is shown below:
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1987-05.com.cisco:01.oracle-3
For this example, the user-editable portion of the file follows the characters ':01.'. Running the /sbin/iscsi-iname command several times shows which portions of the names are identical and which change every time. The portion that changes is the random portion, which can safely be replaced with a name chosen by the administrator.
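For example, two consecutive runs might look like the following (the prefix matches the file above; the hexadecimal suffixes are illustrative, not actual output from any particular system):

```
# /sbin/iscsi-iname
iqn.1987-05.com.cisco:01.8f2c6a1d03e4
# /sbin/iscsi-iname
iqn.1987-05.com.cisco:01.4b9e7d2a5c10
```

Here everything through ':01.' is identical on each run, and only the trailing suffix changes.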
- Edit the /etc/iscsi.conf file
These are the most common options storage vendors recommend for the iscsi.conf configuration file. Administrators should check with their iSCSI storage hardware vendors to determine if these options are correct for their hardware. These options are already included in the example file provided by the iscsi-initiator-utils package. Simply uncomment the appropriate lines and add any necessary values to make them active. Note that the IP address of the iSCSI target being used must be added on the DiscoveryAddress line. Administrators should use multiple DiscoveryAddress lines if there are targets on multiple machines.
Continuous=no
HeaderDigest=never
DataDigest=never
ConnFailTimeout=180
ImmediateData=yes
DiscoveryAddress=<IP address of target>
This file also stores security settings. If incoming, outgoing or bi-directional security is desired, modify these lines in the iscsi.conf file to enable security:
OutgoingUsername=<username>
OutgoingPassword=<password>
IncomingUsername=<anotherusername>
IncomingPassword=<anotherpassword>
Note that the username and password used for incoming security cannot match the username and password used for outgoing security. See the related articles section at the bottom of the page for more information about setting security parameters.
- Create the LUN(s)
Using the directions provided by the iSCSI target hardware vendor, set up space on the storage system to be exported as an iSCSI LUN to the Red Hat Enterprise Linux system.
- Create a LUN masking group
Most iSCSI targets allow for LUN masking, which permits only certain LUNs to be visible from any particular host. This simplifies storage management by preventing systems from seeing LUNs that are unrelated to that system. Follow the hardware vendor's directions for creating a LUN masking group to ensure that LUNs are only seen by systems that should have access to them.
- Add appropriate LUNs to LUN masking group
Add the LUNs created earlier to the LUN masking group just created.
- Add initiator names to LUN masking group
Add the initiator name(s) of the appropriate systems to the LUN masking group.
- Set up security (optional)
Set up security on the target to match the security settings in the /etc/iscsi.conf file on the initiator system. Security is optional, but if it is used, the settings on the target must match those on the initiator.
Each type of target has its own configuration parameters and method for setting up LUNs, but a common set of tasks must be performed on every target to prepare storage for export as an iSCSI LUN.
Initiator configuration, Part 2
- Start the iSCSI initiator service and configure it to start automatically on boot:
service iscsi start
chkconfig iscsi --level 2345 on
It may take a few moments for the service to start. This is normal behavior.
- Scan for LUNs:
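The scan command itself is not preserved here. With the linux-iscsi software initiator shipped in these releases, restarting the iscsi service rediscovers the configured targets, and the iscsi-ls utility (from the iscsi-initiator-utils package) lists the LUNs that were found. A sketch:

```
# service iscsi restart
# iscsi-ls
```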
- Create new partitions on the LUNs:
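The partitioning command is not preserved here; the standard tool is fdisk, run against whichever device node the LUN appeared as. A sketch (/dev/sda below is illustrative, taken from the earlier /proc/partitions listing):

```
# fdisk /dev/sda
```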
- Create filesystems on the LUNs, using disk labels
mke2fs -j -L <label> /dev/sdXY
It is extremely important to use disk labels when working with iSCSI storage. Delays in network traffic may cause the LUNs to be discovered in a different order the next time the system is booted. The only way to guarantee that the correct partition will be mounted at the proper mount point is to use disk labels to identify them.
- Edit the /etc/fstab file to enable the iSCSI LUNs to mount at boot time
The following example shows /etc/fstab entries for two iSCSI LUNs:
#device       mount point  FS    Options  Backup  fsck
LABEL=data1   /mnt/data1   ext3  _netdev  0       0
LABEL=data2   /mnt/data2   ext3  _netdev  0       0
Note the special _netdev option, which defers mounting until the network is available; it is required when mounting iSCSI storage.
- Mount the partitions:
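The mount command is not preserved here. With the fstab entries above in place, each partition can be mounted by its mount point (a sketch using the example entries):

```
# mount /mnt/data1
# mount /mnt/data2
```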
For more information about iSCSI, please see the related articles section at the bottom of this page.
How can multiple systems access a single iSCSI LUN?
by Gary Case
It is important to remember that iSCSI, like other SAN access methods, provides no file locking mechanism. Mounting the same LUN on multiple systems simultaneously with no provision for file locking will cause data corruption! If more than one system needs to access the data on an iSCSI LUN, GFS, NFS, or Samba should be used to properly share the data.
How can LVM on a SAN be used to provide more than the 15-partition-per-disk limit?
by Jennifer Bramble
Since LVM lets a block device be divided into smaller block devices (or combined with others into mirrors), the LUN can be made a physical volume in a volume group, and logical volumes can then be created from that volume group.
This does not allow an unlimited number of partitions, but it does allow more than the physical disk limit of 15 partitions.
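The approach can be sketched with the standard LVM tools (the device node /dev/sdb and the volume and logical volume names below are illustrative):

```
# pvcreate /dev/sdb
# vgcreate sanvg /dev/sdb
# lvcreate -L 10G -n data1 sanvg
# lvcreate -L 10G -n data2 sanvg
# mke2fs -j /dev/sanvg/data1
```

Each logical volume then behaves like a partition, and the volume group can hold far more logical volumes than a disk can hold partitions.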
How do I configure Postfix to use Simple Authentication Security Layer (SASL) with Transport Layer Security (TLS) on Red Hat Enterprise Linux 4?
by Sam Folk-Williams
Using Postfix with SASL allows an email administrator to force users who access the mail server from an untrusted location to authenticate with their user name and password. Adding TLS provides encryption for the password transaction. This is a primary method of protecting the mail server from being used by spammers while simultaneously protecting user account information. Note that this article pertains only to the SMTP (outgoing) connection; please see other articles in the Knowledgebase for information on using TLS with POP and IMAP (incoming) connections.
This article assumes that Postfix is already properly configured to send and receive email without SASL and TLS. If this is not the case, see other articles in the Knowledgebase for how to set up Postfix from scratch.
Setting up SASL
In order to add SASL support to a Postfix configuration, the cyrus-sasl package must be installed:
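The installation command itself is not preserved here; on a system registered with RHN it can be installed with up2date, as with iscsi-initiator-utils earlier in this article (a sketch):

```
# up2date cyrus-sasl
```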
After cyrus-sasl is installed, SASL users must be created for each mail user with the saslpasswd2 command. Note that all SASL users should already exist as normal system users, and each SASL password should match the user's system account password.
[root@localhost ~]# saslpasswd2 -c frank Password: Again (for verification):
The SASL users are stored in a database file, /etc/sasldb2. This file's group must be changed to postfix:
chown :postfix /etc/sasldb2
After the users are created, two configuration files need to be updated. The /usr/lib/sasl2/smtpd.conf file should look as follows:
[root@localhost ~]# cat /usr/lib/sasl2/smtpd.conf
pwcheck_method: auxprop
Setting up TLS
In order to use TLS, an SSL certificate chain must be created. Please see other articles in the Knowledgebase for how to create the necessary SSL certificates. This article assumes that the system has the following certificate files:
/usr/share/ssl/certs/key.pem /usr/share/ssl/certs/cert.pem /usr/share/ssl/certs/cacert.pem
Configuring Postfix for TLS and SASL
The /etc/postfix/main.cf file needs to have the following lines added:
#### SASL bits ####
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain =
smtpd_sasl_security_options = noanonymous
## The following allows anyone who is in mynetworks, or anyone
## who can authenticate, to send mail through this server
smtpd_recipient_restrictions = permit_sasl_authenticated,
    reject_unauth_destination, permit_mynetworks check_relay_domains
smtpd_delay_reject = yes
## this is necessary for some email clients
broken_sasl_auth_clients = yes
#### TLS bits ####
smtpd_tls_auth_only = no
smtp_use_tls = yes
smtpd_use_tls = yes
smtp_tls_note_starttls_offer = yes
## Location of key, cert and CA-cert.
## These files need to be generated using openssl
smtpd_tls_key_file = /usr/share/ssl/certs/key.pem
smtpd_tls_cert_file = /usr/share/ssl/certs/cert.pem
smtpd_tls_CAfile = /usr/share/ssl/certs/cacert.pem
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_timeout = 3600s
tls_random_exchange_name = /var/run/prng_exch
tls_random_source = dev:/dev/urandom
tls_smtp_use_tls = yes
ipv6_version = 1.25
Postfix must be reloaded or restarted after making these changes. Note that with older versions of Postfix, extra steps were needed before the daemon could read /etc/sasldb2; with Postfix 2.1 and greater this is no longer necessary, as Postfix can access the file directly.
To test TLS, open a telnet session on port 25:
[root@localhost ~]# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
220 localhost.localdomain ESMTP Postfix
ehlo localhost                                        <-- YOU TYPE THIS
250-localhost.localdomain
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS                                          <-- THIS INDICATES TLS IS RUNNING
250-AUTH GSSAPI DIGEST-MD5 CRAM-MD5 PLAIN LOGIN NTLM  <-- THESE LINES INDICATE AUTH TYPES
250-AUTH=GSSAPI DIGEST-MD5 CRAM-MD5 PLAIN LOGIN NTLM  <--
250 8BITMIME
For an example of a fully functional /etc/postfix/main.cf file, please see: http://people.redhat.com/sfolkwil/main.cf.
How do I open the telnet port in the Dell Remote Access Card (DRAC) so that Red Hat Enterprise Linux 4's fencing agent can access it?
by Michael Napolis
The Dell Remote Access Card (DRAC) provides remote control of a server's power. Red Hat Enterprise Linux 4's fencing agent logs into the DRAC through the card's telnet interface. The fence_drac agent can reboot, power off, power on, and check the power status of the server.
By default, however, the telnet interface of the DRAC is not enabled. Install the racser-devel rpm from Dell, then execute the commands below to enable it:
racadm config -g cfgSerial -o cfgSerialTelnetEnable 1
racadm racreset
Why does my Dell PowerEdge 6850 hang at "Loading megaraid_mbox driver" when I attempt to boot or install Red Hat Enterprise Linux 4 x86_64?
by Jon Fautley
Release Found: Red Hat Enterprise Linux 4 Update 2 and 3 x86_64
This problem has been seen in Red Hat Enterprise Linux 4 update 2 and 3 on the x86_64 architecture.
The Dell PowerEdge 6850 hangs with the error:
Loading megaraid_mbox driver...
Pass the options "acpi=off nousb" to the kernel at boot time on an installed system:
- If the system cannot boot or hangs without continuing, enter linux rescue mode by booting from the first installation CD; at the boot prompt, type:
boot: linux rescue
- Proceed through the prompts; the system will drop to a bash prompt. Mount the filesystems by executing:
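The mount command is not preserved here. In rescue mode the installed system is typically mounted under /mnt/sysimage, after which you can work on it with chroot (a sketch, assuming the default rescue mount point):

```
# chroot /mnt/sysimage
```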
- Edit the /etc/grub.conf with a text editor.
- Add acpi=off nousb to the kernel line. For example:
title Red Hat Enterprise Linux AS (2.6.9-25.EL)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-25.EL ro root=/dev/VolGroup00/LogVol00 rhgb quiet acpi=off nousb
        initrd /initrd-2.6.9-25.EL.img
- Save the file and reboot.
During installation, pass the options at the boot prompt by typing:
boot: linux acpi=off nousb
This prevents the hang from occurring and allows the 64-bit version of Red Hat Enterprise Linux to be installed on the Dell PowerEdge 6850.
Note: On large SMP systems, specify "acpi=off" at the boot line to complete the installation. Depending on the system, however, it may not be necessary to add "acpi=off" to the kernel line in the bootloader configuration file (/etc/grub.conf). If the kernel appears to ignore the hyperthreading capabilities of the CPUs (i.e., it sees only half of the logical CPUs), remove "acpi=off" from the kernel line and reboot the system.
The information provided in this article is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.
This article is protected by the Open Publication License, V1.0 or later. Copyright © 2004 by Red Hat, Inc.