Issue #13 November 2005
Tips & tricks
Red Hat's customer service team receives technical support questions from users all over the world. Red Hat technicians add the questions and answers to the Red Hat Knowledgebase daily, and anyone with a redhat.com login can access it. Every month, Red Hat Magazine offers a preview of the Knowledgebase by highlighting some of the most recent entries.
Tips from RHCEs
by Brad Smith, Red Hat Instructor
Did you know that you can do simple math on the command-line? Try something like the following:
[brad@satsuki ~]$ X=5
[brad@satsuki ~]$ Y=10
[brad@satsuki ~]$ echo $[ ($X+$Y)*2/10 ]
3
This can be really useful when you need a quick calculation but don't want to pull up a calculator app. Be warned, though: bash doesn't do decimals. Instead, it truncates the result to an integer:
[brad@satsuki ~]$ echo $[ 3/2 ]
1
For more complicated math, you'll need to use a graphical tool like gnome-calculator or a more precise CLI calculator like bc:
[brad@satsuki ~]$ echo "3/2" | bc -l
1.50000000000000000000
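Note that the $[ ] syntax shown above is an older bash-only form; the POSIX-standard equivalent $(( )) works in bash and most other shells:

```shell
X=5
Y=10
# POSIX arithmetic expansion; equivalent to the $[ ] form above
echo $(( (X + Y) * 2 / 10 ))   # prints 3
```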
Is it possible to extract a single file or a few files from a large tarball instead of untarring the entire tarball?
by Matthew Davis
The following command will extract a single file from a tarball:
# tar xf file.tar full/path/to/file
For example, suppose the following tarball is generated:
# tar cvf pictures.tar pictures
pictures/
pictures/seattle.jpg
pictures/WG.jpg
To extract seattle.jpg without untarring the entire file set, run:
# tar xf pictures.tar pictures/seattle.jpg
# ls -al pictures/seattle.jpg
-rw-rw-r--  1 user user 1458298 Jul 26 16:16 pictures/seattle.jpg
Note: You must provide the FULL path to the file you want to extract.
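If you do not know the exact path inside the archive, list its contents first with tar tf. A quick sketch, using the same file names as the example above:

```shell
# Build a sample archive to work with
mkdir -p pictures
touch pictures/seattle.jpg pictures/WG.jpg
tar cf pictures.tar pictures

# Find the full member path, then extract only that member
tar tf pictures.tar | grep seattle    # shows: pictures/seattle.jpg
rm -rf pictures                       # simulate not having the files
tar xf pictures.tar pictures/seattle.jpg
ls pictures                           # only seattle.jpg was restored
```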
How do I use dm-crypt on LVM2 to create an encrypted PV (physical volume)?
by Bastien Nocera
Here is an example of how to use dm-crypt. In this example, DATA is an existing volume group.
- Create a logical volume called CRYPTO using lvcreate:
[root@testmachine /]# lvcreate -n CRYPTO -L+100M DATA
Logical volume "CRYPTO" created
- Create a crypto blockdevice on CRYPTO using cryptsetup:
[root@testmachine /]# cryptsetup create DMCRYPT /dev/DATA/CRYPTO
Enter passphrase:
- Check the status with cryptsetup:
[root@testmachine /]# cryptsetup status DMCRYPT
/dev/mapper/DMCRYPT is active:
  cipher:  aes-plain
  keysize: 256 bits
  device:  /dev/dm-6
  offset:  0 sectors
  size:    204800 sectors
- Add a filesystem using mke2fs:
[root@testmachine /]# mke2fs /dev/mapper/DMCRYPT
mke2fs 1.35 (28-Feb-2004)
max_blocks 104857600, rsv_groups = 12800, rsv_gdb = 256
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
25688 inodes, 102400 blocks
5120 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
inode.i_blocks = 3074, i_size = 67383296
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
- Mount the filesystem and create a file:
[root@testmachine /]# mkdir /mnt/crypt
[root@testmachine /]# mount /dev/mapper/DMCRYPT /mnt/crypt
[root@testmachine /]# cd /mnt/crypt
[root@testmachine crypt]# ls -al
total 17
drwxr-xr-x  3 root root  1024 Feb 16 12:32 .
drwxr-xr-x  4 root root  4096 Feb 16 12:33 ..
drwx------  2 root root 12288 Feb 16 12:32 lost+found
[root@testmachine crypt]# df -h
Filesystem            Size  Used Avail Use% Mounted on
[...]
/dev/mapper/DMCRYPT    97M  1.6M   91M   2% /mnt/crypt
[root@testmachine crypt]# touch FOOBAR
[root@testmachine crypt]# ls -al
total 17
drwxr-xr-x  3 root root  1024 Feb 16 13:42 .
drwxr-xr-x  4 root root  4096 Feb 16 12:33 ..
-rw-r--r--  1 root root     0 Feb 16 13:42 FOOBAR
drwx------  2 root root 12288 Feb 16 12:32 lost+found
[root@testmachine crypt]# cd ..
- Now unmount the filesystem and remove the crypto block device from the mapper. The data on the device is now inaccessible without the passphrase:
[root@testmachine mnt]# umount /mnt/crypt
[root@testmachine mnt]# cryptsetup remove DMCRYPT
The following steps can also be performed after a reboot.
- Re-create the block device and see what happens when a wrong passphrase is given:
[root@testmachine mnt]# cryptsetup create DMCRYPT /dev/DATA/CRYPTO
Enter passphrase:    <----------- WRONG PASSPHRASE!
[root@testmachine mnt]# mount /dev/mapper/DMCRYPT /mnt/crypt
mount: you must specify the filesystem type
- Try again with the correct passphrase:
[root@testmachine mnt]# cryptsetup remove DMCRYPT
[root@testmachine mnt]# cryptsetup create DMCRYPT /dev/DATA/CRYPTO
Enter passphrase:
[root@testmachine mnt]# mount /dev/mapper/DMCRYPT /mnt/crypt
[root@testmachine mnt]# ls -al /mnt/crypt/
total 17
drwxr-xr-x  3 root root  1024 Feb 16 12:32 .
drwxr-xr-x  4 root root  4096 Feb 16 12:33 ..
-rw-r--r--  1 root root     0 Feb 16 13:42 FOOBAR
drwx------  2 root root 12288 Feb 16 12:32 lost+found
How do I disable Hyper-Threading if GRUB is the boot loader?
by Matthew Davis
To disable Hyper-Threading for a single boot, add an option to the kernel line during the boot sequence:
- In the GRUB menu, select the kernel to boot into.
- Type 'e' to modify the kernel arguments before booting.
- Add a space then type 'noht' at the end of the line.
- Hit return to accept the change.
- Type 'b' to boot to this kernel.
To make the change permanent:
- Edit the /etc/grub.conf file.
- Find the kernel line that you want to modify.
- Add noht to the kernel line. For example:
kernel /vmlinuz-2.4.21-15.EL ro root=LABEL=/ noht
For more information on Hyper-Threading technology, see the following: http://www.intel.com/technology/hyperthread/.
How do I configure a swap partition as a dump device for diskdump for Red Hat Enterprise Linux 4 Update 2?
by Akira Imamura
Release Found: Red Hat Enterprise Linux 4 Update 2
Do not follow the README file in the diskdumputils-1.1.9 RPM when configuring a swap partition for diskdump; doing so disables the partition. The README incorrectly instructs users to format the swap partition, which corrupts it. Dump devices other than swap partitions always require service diskdump initialformat before use as dedicated diskdump devices; swap partitions, in contrast, must not be formatted. For further information, see the "Setup" section in the latest README file.
The diskdumputils package is installed on the machine that you wish to capture dumps on, in the event of a system panic. It loads and configures the diskdump kernel modules so that if the machine crashes, the memory dump will be dumped to disk.
Diskdump is only supported with the following storage adapters:
Red Hat Enterprise Linux 3    Red Hat Enterprise Linux 4
---------------------------------------------------------
aic7xxx                       aic7xxx
aic79xx                       aic79xx
dpt_i2o                       ipr
megaraid2                     megaraid
mptfusion                     mptfusion
sym53c8xx                     sym53c8xx
sata_promise                  sata_promise
ata_piix                      ata_piix
                              CCISS
Disk dump is supported in the following Red Hat kernels, where <kernel-version> is the version containing this diskdumputils package:
Red Hat Enterprise Linux 3
    kernel-<kernel-version>.i686.rpm
    kernel-smp-<kernel-version>.i686.rpm
    kernel-hugemem-<kernel-version>.i686.rpm
    kernel-<kernel-version>.athlon.rpm
    kernel-smp-<kernel-version>.athlon.rpm
    kernel-<kernel-version>.ia64.rpm
    kernel-<kernel-version>.x86-64.rpm
    kernel-smp-<kernel-version>.x86-64.rpm
    kernel-smp-<kernel-version>.ia32e.rpm

Red Hat Enterprise Linux 4
    kernel-<kernel-version>.i686.rpm
    kernel-smp-<kernel-version>.i686.rpm
    kernel-hugemem-<kernel-version>.i686.rpm
    kernel-<kernel-version>.ia64.rpm
    kernel-<kernel-version>.ppc64.rpm
    kernel-<kernel-version>.x86-64.rpm
    kernel-smp-<kernel-version>.x86-64.rpm
Dump Device Selection
The first step in the configuration process is to designate a disk device to dump memory to in the event of a system crash. The dump device may be any of the following:
- a full disk device (e.g. /dev/sda)
- a partition of a disk device (e.g. /dev/sda4)
- a swap partition -Red Hat Enterprise Linux 4 only - (e.g. /dev/sda2 or /dev/VolGroup00/LogVol01)
If you configure dumping to the swap partition, both /usr and /var must be mounted locally, for reasons described in the next paragraph.
In the event of a system crash, the memory contents are saved to /var/crash. When the system reboots, the diskdumputils commands, which are installed under /usr, run to preserve the saved memory in /var/crash. This save operation runs in the boot sequence before swap is enabled and before remote filesystems are mounted. If /usr or /var were mounted remotely, the diskdump service would fail, because remote filesystems are mounted later than the swap initialization in rc.sysinit.
The dump device should be large enough to hold the whole dump, which consists of the size of all physical memory plus a header. To determine the exact size required, refer to the output of /proc/diskdump after the diskdump module is loaded:
# modprobe diskdump
# cat /proc/diskdump
# sample_rate: 8
# block_order: 2
# fallback_on_err: 1
# allow_risky_dumps: 1
# total_blocks: 262042
The total block size is shown by page-size units, so in this example, the selected device must contain at least (262042 * 4096) bytes on an i386 machine.
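The minimum device size can be computed directly with shell arithmetic, using the total_blocks value from /proc/diskdump and the page size (4096 bytes on i386):

```shell
TOTAL_BLOCKS=262042
PAGE_SIZE=4096            # getconf PAGESIZE reports this on i386
MIN_BYTES=$(( TOTAL_BLOCKS * PAGE_SIZE ))
echo "$MIN_BYTES"         # 1073324032 bytes, roughly 1 GB
```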
Note: During a diskdump operation, memory contents residing on the swap partition are not preserved. Therefore the dump partition size corresponds to physical memory rather than physical memory plus the size of the swap partition.
Next, based on the information above, select a dump device.
Register it by editing /etc/sysconfig/diskdump in the following format:
Multiple dump devices can be registered in a colon-separated format like:
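The registration examples did not survive in this copy of the article. As a sketch, diskdump configuration files of this era used a DEVICE= line; treat the exact syntax as an assumption and confirm against the diskdumputils README:

```
# /etc/sysconfig/diskdump (assumed syntax)
DEVICE=/dev/sda5

# Multiple dedicated dump devices, colon-separated:
DEVICE=/dev/sda5:/dev/sdb2
```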
The benefit of designating more than one dump device is redundancy. For example, if each dump device was controlled by a different driver, even if a system panic occurred in a driver that controls one of the registered devices, the memory could be dumped out using the other registered device. In this case it is required that each dump device be sufficiently large to store the full dump. Multiple dump devices are not supported if you are dumping to a swap device. Consequently, designating both a swap device and dedicated dump partition is not allowed.
Dump Device Formatting
Note: Skip this step if you are dumping to the swap partition.
The second step in the configuration process involves formatting the dump device.
Any dump device other than a swap partition must be specially formatted for diskdump before use. Accordingly, a conventional filesystem cannot be created on the designated dump partition.
The dump device formatting needs to be done once by the system administrator. (Note: This step must be skipped if you configured a swap partition as a dump device. Otherwise the swap partition becomes unusable for swapping because it is formatted as a diskdump-dedicated device.):
# service diskdump initialformat
Enable Diskdump Service
Lastly, start the diskdump service:
# chkconfig diskdump on
# service diskdump start
The registered device/partition can be viewed through the /proc/diskdump interface:
# cat /proc/diskdump
/dev/sde1 514080 1012095
If the registered dump device needs to be replaced, edit /etc/sysconfig/diskdump, format the new dump device as described above, and then restart the diskdump service:
# service diskdump restart
To test the diskdump functionality, use Alt-SysRq-C or echo c > /proc/sysrq-trigger. After the dump completes, a vmcore file is created during the next reboot sequence and saved in a directory under /var/crash.
The vmcore file's format is the same as that created by the netdump facility, so the crash(8) command can be used to analyze it.
After the initial configuration, no further steps are required. Be sure to keep the designated dump partition sufficiently large; if there is not enough space, the dump will be only partially saved, resulting in an incomplete dump file named vmcore-incomplete.
Diskdump currently contains one customizable script file called diskdump-nospace. The diskdump-nospace script is called prior to the creation of the vmcore file if /var/crash does not have enough space to hold the complete dumpfile. The script may be customized to clean up enough space for the dump in question to proceed.
The diskdump module has the following module parameters:
block_order: Specifies the dump-time I/O block size. The default value is 2, which sets the I/O block size to page-size << 2, or 16 kbytes on an i386 machine. Larger values may improve performance but occupy more module memory.
sample_rate: Determines how many blocks in the dump partition are verified before actual memory dumping begins. The default value is 8, meaning one of every 1<<8 (256) blocks is verified. Zero means all blocks in the partition are verified; a negative value disables verification.
dump_level: A memory collection level that specifies which memory pages will be dumped. Default value of 0 dumps all pages of physical RAM into the vmcore file. To avoid excessively large vmcore files, page cache pages, zero-filled pages, free pages, and user application pages may be eliminated from the file. Specifying one of the dump_level values from 1 to 15 will skip one or more memory page type(s) if that page type is marked with an X in the following table:
dump  cache zero  free  user
level page  page  page  page   description
---------------------------------------------------------
  0                            default
  1     X
  2           X
  3     X     X                recommended
  4                 X
  5     X           X
  6           X     X
  7     X     X     X
  8                       X
  9     X                 X
 10           X           X
 11     X     X           X
 12                 X     X
 13     X           X     X
 14           X     X     X
 15     X     X     X     X    minimum dump size
This partial dump feature provides a memory collection level that selects how much physical memory is dumped. All of physical memory is usually not required to investigate a kernel issue; most of it typically holds user application data, page cache (file data), free pages, and zero-filled pages. By skipping one or more of those page types when creating the vmcore file, the crash dump becomes significantly smaller and the dump procedure less time-consuming. While the actual vmcore file size varies with the state of the system and the dump_level specified, the minimum amount of data required to analyze the dump is always captured. However, since there may be circumstances where capturing all of physical memory is necessary, it is not recommended that the dump partition be smaller than the actual amount of physical memory.
Note that the partial dump feature has some risks. There are memory management lists which are scanned for a page's memory attribute, so if the list has been corrupted, the scanning process may fail. For example, when specifying a dump_level from 4-7 or from 12-15, the kernel's free page linked lists are scanned; if the list is corrupt, diskdump may hang. Furthermore, it is possible that a page type that has been skipped may be necessary to fully investigate the cause of some issues. Therefore, a memory collection level should be selected to suit each situation. The recommended level is 3, because it is easiest to determine whether a page is zero-filled or if it is a page cache page, and because no page lists need to be traversed.
The following option line sets the I/O block size to 32 kbytes, verifies every block in the partition, and skips cache pages and zero pages via the partial dump feature.
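The option line itself is missing from this copy of the article. On Red Hat Enterprise Linux 3 and 4, module parameters are normally set in /etc/modprobe.conf (or /etc/modules.conf), so a sketch consistent with the description might look like the following; the values follow from the parameter definitions above, but verify the syntax against the diskdumputils README:

```
# block_order=3 -> page-size << 3 = 32 kbytes I/O blocks on i386
# sample_rate=0 -> verify every block in the partition
# dump_level=3  -> skip cache pages and zero pages
options diskdump block_order=3 sample_rate=0 dump_level=3
```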
How do I make changes to the /etc/resolv.conf file permanent if the last changes I made were lost during a reboot?
by Archana Raghavan
Modify the /etc/sysconfig/network-scripts/ifcfg-eth<N> file and add the PEERDNS option. <N> refers to the number of the network interface and can be 0, 1, 2, etc. For example, the first Ethernet device on the system is eth0, and its configuration file is /etc/sysconfig/network-scripts/ifcfg-eth0.
Set the parameter PEERDNS to 'no'. For example:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
PEERDNS=no
This option ensures that the /etc/resolv.conf file is not overwritten after a reboot of the system.
How do I set up vsftpd on Red Hat Enterprise Linux 3 so that only specific users have FTP access but cannot otherwise use the system?
by Chris Evich
Release Found: Red Hat Enterprise Linux 3
You need to set up a secure FTP server with tight control over who has access. You do not want anonymous access to the server, and the users should not otherwise be able to log in. The users need to be confined to a specific directory on the system, and their accounts should not be able to receive any e-mail on the local system. This article assumes that the relevant packages (vsftpd, sendmail, etc.) are installed with unmodified configurations, that you are generally familiar with editing configuration files, and that you have access to the root account.
Use the following steps:
- Make sure that the correct FTP services are enabled to start when the system boots. You can do this with the chkconfig vsftpd on command. Make sure the gssftp service is turned off with the chkconfig gssftp off command.
- Edit the /etc/vsftpd/vsftpd.conf file. Change the setting near the top to anonymous_enable=NO. Go to the end of the file and add the option chroot_local_user=YES.
- Add the user account(s) to the system using the useradd -d <directory> -s /sbin/nologin <username> command. Replace <username> with the name of the user to create. Replace <directory> with the directory you would like them confined to. Note: The <username> must be unique, the <directory> does not need to be unique.
- Define a password for the user account(s) using the passwd <username> command.
- Edit the /etc/vsftpd.user_list file and add the names of the user(s) needing access (one per line).
- Edit the /etc/aliases file and add a line <username>: /dev/null for each user, anywhere in the file. Note: This prevents any e-mail from being delivered locally to <username>'s account.
- Execute the newaliases command to sync the contents of /etc/aliases with the running mail server.
- Set ownership and permissions appropriately for the directory/directories specified above in useradd command. See other knowledgebase articles for more information on defining/setting permissions for users and groups.
- Start the vsftpd server service with the service vsftpd start command.
- Test logging in as the user(s) to verify they cannot access the system.
- Test accessing the FTP server as the user(s) with the ftp localhost command. Verify that the users are confined to the directory specified.
- Test any upload/download (or get/put) restrictions specific to your environment.
How do I restrict the users allowed to access the system via SSH with allow and deny directives?
by Michael Napolis
There are several ways to limit the allowed users through SSH's configuration file, /etc/ssh/sshd_config.
The AllowUsers directive can be used to specify the users with SSH access. The directive is followed by a list of username patterns separated by spaces.
To list users who do not have SSH access, use the DenyUsers directive instead. Both directives can use wildcards such as '*' and '?' in the username pattern; numerical user IDs, however, are not recognized. By default, login is allowed for all users.
To restrict groups, use the AllowGroups and DenyGroups options. These options allow or deny users whose primary or supplementary group matches one of the listed group patterns.
Listed below is an example of a /etc/ssh/sshd_config file denying SSH access for user haller:
# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options change a
# default value.

#Port 22
#Protocol 2,1
#ListenAddress 0.0.0.0
#ListenAddress ::

# HostKey for protocol version 1
#HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key

# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 3600
#ServerKeyBits 768

# Logging
#obsoletes QuietMode and FascistLogging
#SyslogFacility AUTH
SyslogFacility AUTHPRIV
#LogLevel INFO

# Authentication:
#LoginGraceTime 120
#PermitRootLogin yes
#StrictModes yes

#RSAAuthentication yes
#PubkeyAuthentication yes
#AuthorizedKeysFile .ssh/authorized_keys

# rhosts authentication should not be used
#RhostsAuthentication no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#RhostsRSAAuthentication no
# similar for protocol version 2
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# RhostsRSAAuthentication and HostbasedAuthentication
#IgnoreUserKnownHosts no

# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no

# Change to no to disable s/key passwords
#ChallengeResponseAuthentication yes

# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#AFSTokenPassing no

# Kerberos TGT Passing only works with the AFS kaserver
#KerberosTgtPassing no

# Set this to 'yes' to enable PAM keyboard-interactive authentication
# Warning: enabling this may bypass the setting of 'PasswordAuthentication'
#PAMAuthenticationViaKbdInt no

#X11Forwarding no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PrintMotd yes
#PrintLastLog yes
#KeepAlive yes
#UseLogin no
UsePrivilegeSeparation no
#PermitUserEnvironment no
Compression no
#MaxStartups 10
# no default banner path
#Banner /some/path
#VerifyReverseMapping no
#ShowPatchLevel no

# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server

DenyUsers haller
After editing the /etc/ssh/sshd_config file, restart the sshd service:
service sshd restart
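Conversely, to permit only a named set of accounts (every other user is then denied), an AllowUsers line can be used. A brief sketch, with hypothetical usernames:

```
# Only these accounts may log in over SSH; patterns like 'admin?' also work
AllowUsers alice bob
```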
Another way to limit SSH users is to use Pluggable Authentication Module (PAM). See additional articles in the Knowledgebase on how to limit who can use SSH based on a list of users.
Why am I getting "read only filesystem" error messages when the filesystem is mounted rw?
by Demosthenes Mateo
If the kernel finds fatal corruption on the disk or if certain key IOs like journal writes start failing, the kernel may remount the filesystem as read-only. This is because the filesystem can no longer maintain write integrity under these conditions. Any such behavior will be thoroughly logged in /var/log/messages.
Should this happen, back up your recent data, as this may be a symptom of an impending disk failure. Run e2fsck on the filesystem as soon as possible with the -c option to enable badblock checking; a normal fsck may not detect all the errors and may return clean. For example:
e2fsck -c /dev/sda3
The information provided in this article is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.
This article is protected by the Open Publication License, V1.0 or later. Copyright © 2004 by Red Hat, Inc.