Different performance

Tina Tian tinatianxia at hotmail.com
Mon May 12 22:21:50 UTC 2008


Bill,
 
Below is the output from dmesg. Host 2 is "write through". Is host 1 really "write through"?
 
Host1:
scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
        <Adaptec 39320A Ultra320 SCSI adapter>
        aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
scsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
        <Adaptec 39320A Ultra320 SCSI adapter>
        aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
megasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006
megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0
ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209
scsi2 : LSI Logic SAS based MegaRAID driver
  Vendor: DP        Model: BACKPLANE         Rev: 1.00
  Type:   Enclosure                          ANSI SCSI revision: 05
  Vendor: DELL      Model: PERC 5/i          Rev: 1.00
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)
sda: asking for cache data failed
sda: assuming drive cache: write through
 sda: sda1 sda2 sda3 sda4 < sda5 >
Attached scsi disk sda at scsi2, channel 2, id 0, lun 0
  Vendor: DELL      Model: PERC 5/i          Rev: 1.00
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB)
sdb: asking for cache data failed
sdb: assuming drive cache: write through
 sdb: sdb1 sdb2 < >
Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0
 
 
Host2:
scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
        <Adaptec 39320A Ultra320 SCSI adapter>
        aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
scsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
        <Adaptec 39320A Ultra320 SCSI adapter>
        aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
megasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006
megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0
ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209
scsi2 : LSI Logic SAS based MegaRAID driver
  Vendor: DP        Model: BACKPLANE         Rev: 1.05
  Type:   Enclosure                          ANSI SCSI revision: 05
  Vendor: DELL      Model: PERC 5/i          Rev: 1.03
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)
SCSI device sda: drive cache: write through
 sda: sda1 sda2 sda3 sda4 < sda5 >
Attached scsi disk sda at scsi2, channel 2, id 0, lun 0
  Vendor: DELL      Model: PERC 5/i          Rev: 1.03
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB)
SCSI device sdb: drive cache: write through
 sdb: sdb1
Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0
  Vendor: DELL      Model: PERC 5/i          Rev: 1.03
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdc: 584843264 512-byte hdwr sectors (299440 MB)
SCSI device sdc: drive cache: write through
 sdc: sdc1 sdc2
Attached scsi disk sdc at scsi2, channel 2, id 2, lun 0
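
(Note, from the dmesg above: on host 1 the kernel could not read the cache mode page at all ("asking for cache data failed ... assuming drive cache: write through"), so "write through" on host 1 is only the kernel's fallback assumption, not a confirmed setting. A more direct check would be to ask the PERC 5/i itself for its logical-drive cache policy. Assuming LSI's MegaCli utility is installed (the binary name and path vary by install), something like:

    MegaCli -LDInfo -Lall -aALL | grep -i 'cache policy'

and, for the SCSI-level write-cache-enable (WCE) bit, assuming the sdparm package is available:

    sdparm --get=WCE /dev/sda /dev/sdb

sdparm may fail against the RAID virtual disk for the same reason the kernel's query failed, in which case the MegaCli output is the one to trust.)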
Thanks,
Tina


From: bill at magicdigits.com
To: redhat-sysadmin-list at redhat.com
Date: Mon, 12 May 2008 14:09:32 -0700
Subject: RE: Different performance



Tina,
Is there any chance that host 1 is set to "write-through" and host 2 is set to dirty (trust me) writes on the hardware disk controller or on the database driver? Note how host 1 waits until all reading is done prior to the first write - host 2 has simultaneous read/writing.
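
(If it helps, the iostat output at the bottom of this thread reads the same way: on host 1, sdb completes roughly 180 writes/s of 16 sectors (8 KB) each, with await and svctm both around 5.4 ms at ~99% utilization - 180 x 0.0054 s is about 0.97, i.e. the device is doing one synchronous 8 KB write at a time, which is what a write-through path looks like. On host 2, sdb shows 358-635 writes/s with await near 0.1 ms; sub-millisecond completions like that usually mean a controller cache is acknowledging writes before they reach the platters.)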
 
Bill Watson
bill at magicdigits.com


-----Original Message-----
From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Monday, May 12, 2008 2:00 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

I just did two tests. Below is the vmstat output from each test.

Test 1: dd if=/dev/zero of=dd_out.out bs=1MB count=700 (dd_out.out is created in the same folder as the database file), as suggested by Doungwu.

vmstat 2 20 on host 1:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0    168 622076  76312 1279844    0    0     1     0    0     0  0  0 100  0
 0  0    168 622076  76312 1279844    0    0     0     0 1003   133  0  0 100  0
 1  0    168 967084  76312  935864    0    0     0     0 1004   136  0  4 96  0
 0  2    168 573380  76372 1320864    0    0     2 130386 1317   352  0  9 72 18

vmstat 2 20 on host 2:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0  11716 699612 229432  723212    0    0     1     0    1     0  1  1 98  0
 2  1  11716 358916 229748 1060636    0    0     0 27648  1057   457  0  7 88  5
 0  1  11716  11328 230080 1402984    0    0     0 141598 1205   512  0  9 72 20
Test 2: load an ASCII data file into the Sybase database on host 1 and host 2.

vmstat 2 20 on host 1:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0    168 1106604  78268 797408    0    0     1     0    0     0  0  0 100  0
 2  0    168 1104076  78300 799456    0    0  1092    58 1034   175  2  1 96  1
 2  0    168 1090124  78316 813480    0    0  6984     0 1102   181 16  9 75  0
 2  0    168 1076108  78332 827244    0    0  6984     0 1101   182 16  9 75  0
 0  1    168 1070044  78368 832928    0    0  2738  1168 1187   400  6  2 82 10
 0  1    168 1069980  78372 832924    0    0     2  1388 1177   364  0  0 87 12
 0  1    168 1070012  78380 832916    0    0     0  1494 1190   392  0  0 88 12
 0  1    168 1070068  78388 832908    0    0     0  1382 1175   361  0  0 87 12
 0  1    168 1070068  78388 832908    0    0     0  1324 1168   345  0  0 87 12
 0  1    168 1069996  78396 832900    0    0     0  1426 1181   373  0  0 88 12
 0  1    168 1069996  78396 832900    0    0     0  1488 1188   387  0  0 87 12
 0  1    168 1070052  78404 832892    0    0     0  1422 1181   374  0  0 87 12
 0  1    168 1070052  78404 832892    0    0     0  1316 1167   343  0  0 88 12
 0  0    168 1070428  78404 832892    0    0     0   690 1093   245  0  0 94  6

vmstat 2 20 on host 2:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0  11716   8156 230820 1412904    0    0     1     0    1     0  1  1 98  0
 0  0  11716   9784 230820 1412904    0    0     0     0 1008  1078  0  0 99  0
 2  0  11716   4680 230852 1417292    0    0  2328    44 1061   462  5  3 92  1
 2  0  11716   4440 230216 1417928    0    0  6936  3330 1515  1309 15  9 74  1
 2  0  11716   4824 226236 1421908    0    0  6862  5060 1736  1740 16  9 75  1
 0  0  11716   4676 219492 1429432    0    0  1912  4786 1636  1615  5  3 92  0
 0  0  11716   6884 217968 1428616    0    0    64     6 1015  1100  2  0 97  1

Can you see anything special?

Thank you,
Tina
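
(One caveat on test 1: without a flush, dd mostly measures how fast the page cache absorbs the data, and the writeback to disk happens afterwards - that is the delayed bo burst of 130386 on host 1. To time the actual disk write, the sync can be included in the measurement, e.g.:

    time sh -c 'dd if=/dev/zero of=dd_out.out bs=1MB count=700; sync'

Newer GNU dd can do the same with conv=fdatasync, but the sh -c form also works with the older coreutils shipped on RHEL 4.)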


Date: Mon, 12 May 2008 13:59:43 -0500
From: jolt at ti.com
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance







Tina,
 
Could you capture vmstat output while under load to see how much memory is swapping and how quickly context switching is occurring?  “vmstat 5 20”
Also, what kernel is running?  “uname -a”
 




From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Monday, May 12, 2008 1:37 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance
 
Thank you, Joseph. Let me explain it. On both host 1 and host 2, the Sybase software is in /sybase and the Sybase database is in /sybasedata. On host 2, we have Amanda backup software on /dev/sdc, and I believe some Amanda daemon was running when I ran iostat (as you noted: "From the output of host 2 you provided, the first stat shows sdc is taking some of the load"). Host 2 does have the additional higher-performance drives, but they are not used by the Sybase database (/sybasedata) at all. Will the database benefit from their quicker swap? Below are the results from fdisk/mount/dmesg (swap) on host 1 and host 2.

Host 1, fdisk -l:
-----------------
Disk /dev/sda: 72.7 GB, 72746008576 bytes
255 heads, 63 sectors/track, 8844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           4       32098+  de  Dell Utility
/dev/sda2               5        1279    10241437+  83  Linux
/dev/sda3   *        1280        1406     1020127+  83  Linux
/dev/sda4            1407        8844    59745735    5  Extended
/dev/sda5            1407        8844    59745703+  8e  Linux LVM

Disk /dev/sdb: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       66868   537117178+  83  Linux
/dev/sdb2           66869       72809    47721082+   5  Extended

Host 1, mount:
---------------
/dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda3 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw)
/dev/sdb1 on /sybasedata type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Host 1, dmesg | grep swap:
--------------------------
Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1.  Priority:-1 extents:1
Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0.  Priority:-2 extents:1

Host 2, fdisk -l:
-----------------
Disk /dev/sda: 72.7 GB, 72746008576 bytes
255 heads, 63 sectors/track, 8844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           4       32098+  de  Dell Utility
/dev/sda2               5        1534    12289725   83  Linux
/dev/sda3   *        1535        1661     1020127+  83  Linux
/dev/sda4            1662        8844    57697447+   5  Extended
/dev/sda5            1662        8844    57697416   8e  Linux LVM

Disk /dev/sdb: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       72809   584838261   83  Linux

Disk /dev/sdc: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        4370    35101993+  83  Linux
/dev/sdc2            4371       36404   257313105   83  Linux

Disk /dev/sdd: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       38913   312568641   83  Linux

Host 2, mount:
---------------
/dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda3 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw)
/dev/sdb1 on /sybasedata type ext3 (rw)
/dev/sdc1 on /pkgs type ext3 (rw)
/dev/sdc2 on /amanda-data type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Host 2, dmesg | grep swap:
--------------------------
Adding 1769464k swap on /dev/VolGroup_ID_787/LogVol0.  Priority:-1 extents:1

Best Regards,
Tina
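
(Whether swap speed matters at all can be read straight off vmstat: in the outputs earlier in this thread, the si/so columns stay at 0 on both hosts during the load, so neither machine is actually swapping. For a quick summary of the configured swap devices, their priorities, and current usage:

    swapon -s
    free -m
)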



Date: Mon, 12 May 2008 07:11:12 -0500
From: jolt at ti.com
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

Tina,
 
How are the partitions laid out on the two systems?  It is likely that something OS-related is accessing sda and sdb on host 1 while the load is spread across more disks on host 2.  From the output of host 2 you provided, the first stat shows sdc is taking some of the load.  Even though the RAM is the same in both systems, is there much swapping?  Swapping on higher-performance drives will be quicker.
 
Regards,
 
Joseph
 




From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Friday, May 09, 2008 10:42 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance
 
The DB is Sybase ASE 15.0.2, with an identical configuration on the two hosts. My SA also confirmed that the two hosts are almost identical, except that host 2 (the faster DB load) has two extra disks, sdc and sdd, which spin at a higher 15k RPM. The remaining disks, sda and sdb, are identical on the two hosts, at 7k RPM. On both host 1 and host 2, the DBs are on /dev/sdb only.

Best Regards,
Tina



To: redhat-sysadmin-list at redhat.com
Date: Fri, 9 May 2008 16:58:19 -0600
From: larry.sorensen at juno.com
Subject: Re: Different performance

Please include information on the databases, including versions. It could just be a difference in database configuration. Are the patches up to date and equal on both servers?
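
(One quick way to compare patch levels on rpm-based hosts - dump and diff the package lists; the file names here are just examples:

    rpm -qa | sort > /tmp/pkgs-host1.txt    # run on host 1
    rpm -qa | sort > /tmp/pkgs-host2.txt    # run on host 2
    diff /tmp/pkgs-host1.txt /tmp/pkgs-host2.txt

For the database side, running "select @@version" in each ASE server shows the exact version and EBF level.)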

 

On Fri, 9 May 2008 14:11:25 -0700 Tina Tian <tinatianxia at hotmail.com> writes:


I am a DBA. I have identical database servers running on two Red Hat Enterprise Linux 4 hosts, host 1 and host 2. When I run the same bulk load into the database (loading a data file), host 2 is much faster than host 1. On both host 1 and host 2, the databases use file systems mounted from /dev/sda and /dev/sdb. I checked with my SA: host 1 and host 2 have the same CPU, RAM, and file system configuration. The only difference is that host 2 has extra HD capacity at a higher 15k RPM. But those two extra HDs (sdc and sdd) are dedicated to other applications and are not used by the database at all.

My questions are:
-----------------
1. On host 2 (faster), the extra, faster HDs (/dev/sdc and sdd) are not used by the database. Do they still affect the IO performance of /dev/sda and /dev/sdb?
2. During the database bulk-load testing, host 1 (slower) shows longer IO service time (svctm) and longer IO wait time (await). What other possible reasons could cause this? Any ideas? I posted the same issue to a database discussion group, and they suggested checking OS performance (svctm).

Below is the result from iostat on host 1 (slower) and host 2 (faster) during the bulk load:

Host 1: iostat -x 2
===================
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.15    0.00    0.07    0.28   99.49
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.01   0.59  0.24  0.19   29.22    6.17    14.61     3.08    83.49     0.01   21.71   3.84   0.16
sdb          0.04  10.05  0.89  3.74  117.37  110.34    58.69    55.17    49.13     0.10   21.76   4.48   2.08
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
          15.74    0.00    8.99    0.31   74.95
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          1.99   0.00 57.71  0.00 14025.87    0.00  7012.94     0.00   243.03     0.21    3.58   3.53  20.35
sdb          0.00   0.00 11.94  0.00   95.52    0.00    47.76     0.00     8.00     0.02    2.04   2.04   2.44
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           6.18    0.00    2.37    9.24   82.20
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.50   0.50 23.00  1.00 5732.00   12.00  2866.00     6.00   239.33     0.07    3.08   3.02   7.25
sdb          0.00 129.00  7.00 130.00   56.00 2076.00    28.00  1038.00    15.56     0.75    5.49   5.40  73.95
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.06    0.00    0.12   12.44   87.38
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   3.50  0.00  3.00    0.00   52.00     0.00    26.00    17.33     0.03   10.00   3.67   1.10
sdb          0.00 182.50  0.00 182.50    0.00 2920.00     0.00  1460.00    16.00     0.99    5.44   5.44  99.30
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.12   12.49   87.38
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   0.50  0.00  1.01    0.00   12.06     0.00     6.03    12.00     0.01    6.00   6.00   0.60
sdb          0.00 184.92  0.00 185.43    0.00 2962.81     0.00  1481.41    15.98     1.01    5.45   5.38  99.70
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.06   12.43   87.51
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb          0.00 184.08  0.00 184.08    0.00 2945.27     0.00  1472.64    16.00     0.99    5.39   5.38  99.00
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.12   12.31   87.56
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   1.00  0.00  1.50    0.00   20.00     0.00    10.00    13.33     0.02   15.33   6.67   1.00
sdb          0.00 181.00  0.00 181.00    0.00 2896.00     0.00  1448.00    16.00     0.99    5.48   5.49  99.40
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.19   12.37   87.45
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb          0.00 178.00  0.00 178.50    0.00 2852.00     0.00  1426.00    15.98     1.00    5.61   5.55  99.10
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.12   12.37   87.51
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb          0.00 179.50  0.00 179.50    0.00 2872.00     0.00  1436.00    16.00     0.99    5.52   5.53  99.25
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.00    0.00    0.06   12.44   87.50
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   1.50  0.00  3.50    0.00   40.00     0.00    20.00    11.43     0.07   20.00   4.00   1.40
sdb          0.00 179.00  0.00 179.50    0.00 2868.00     0.00  1434.00    15.98     1.02    5.68   5.53  99.30
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.06    0.00    0.19   12.41   87.34
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   0.50  0.00  1.00    0.00   12.00     0.00     6.00    12.00     0.01    6.50   6.50   0.65
sdb          0.00 183.50  0.00 183.50    0.00 2936.00     0.00  1468.00    16.00     0.99    5.40   5.41  99.25
 
 
Host 2: iostat -x 2
==================
avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.96    0.00    0.69    0.21   98.15
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
hda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    48.00     0.00    1.33   1.33   0.00
sda          0.01   5.31  0.23  1.55   17.96   54.93     8.98    27.47    40.76     0.07   41.59   1.21   0.22
sdb          0.03   3.99  0.84  0.47  113.52   35.67    56.76    17.83   114.36     0.03   23.00   2.55   0.33
sdc          0.05  37.80  0.58  1.50  131.96  314.37    65.98   157.19   214.93     0.43  205.85   2.84   0.59
sdd          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    40.35     0.00    3.52   3.52   0.00
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
          16.03    0.00    8.61    0.44   74.92
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
hda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda          1.99  14.43 57.71  6.97 13775.12  171.14  6887.56    85.57   215.63     0.22    3.43   3.36  21.74
sdb          0.00 357.71  7.96 358.71   63.68 5731.34    31.84  2865.67    15.80     0.04    0.10   0.10   3.83
sdc          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
 
avg-cpu:  %user   %nice    %sys %iowait   %idle
          15.62    0.00    8.81    0.56   75.00
 
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
hda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda          1.50   0.00 56.00  0.00 13964.00    0.00  6982.00     0.00   249.36     0.22    3.90   3.89  21.80
sdb          0.00 635.00  7.00 635.00   64.00 10160.00    32.00  5080.00    15.93     0.06    0.09   0.09   5.55
sdc          0.00   1.00  0.00  1.50    0.00   20.00     0.00    10.00    13.33     0.00    0.00   0.00   0.00
sdd          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
 
 
Thanks,
Tina
 




 
 



 




