[rhn-users] LVM

Lamon, Frank III Frank_LaMon at csx.com
Mon Jun 26 23:13:41 UTC 2006


The problem isn't with LVM (that part's easy); the problem is that you changed the size of the drive (/dev/sda) under the nose of the OS. It looks like the OS sees the drive at the new size now (/dev/sda at 146.5GB), but the partitions still reflect the previous sizes. You might be able to use parted to extend partitions 4 & 5, but you might whack the OS when doing this. The next issue would be getting the PV /dev/sda5 to recognize the new size - I'm not sure whether it will. At that point you could just use lvextend to grow whichever LV needed to be larger and then use ext2online to resize the filesystem on that LV on the fly.
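If you do try that grow-in-place route, the rough sequence would look something like this. This is a dry-run sketch only - every command is just echoed, never executed - because the parted syntax varies by version, the partition bounds and LV size below are placeholders/assumptions from your output, and getting any of them wrong on a live OS can eat the disk:

```shell
# Dry-run sketch of the grow-in-place path described above, using the
# device names from the output quoted below. START/END are placeholders,
# and the lvextend size is only an example, not a prescription.
run() { echo "+ $*"; }   # change to run() { "$@"; } to really execute

run parted /dev/sda resize 4 START END   # grow extended partition sda4
run parted /dev/sda resize 5 START END   # grow sda5 inside it to match
run pvresize /dev/sda5                   # let LVM see the larger PV
run lvextend -L +60G /dev/VolGroup00/LogVol00   # grow the big LV
run ext2online /dev/VolGroup00/LogVol00         # resize the ext3 fs online
```

Again: that's the shape of it, not something to paste in blindly - back up first.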

I'm trying to figure out your disk partition layout.
sda1 is obviously the Dell utility partition
sda2 looks like a Windows partition - but why would you have a Windows partition on a server?
sda3 looks like /boot
sda4 is the extended partition that held the rest of the original 73GB disk (sda5 lives inside sda4)
sda5 is your only LVM PV
I only see 2 LVs.
I guess one is / and the other is for your app - do you not have swap space? Maybe the output of "cat /etc/fstab" or "df -hl" would clear this up.
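As a quick sanity check on the LVM numbers quoted below, the sizes are just extent counts times the 32MB extent size - the same arithmetic is worth re-running after any resize to confirm LVM picked up the change:

```shell
# Sanity-check the extent arithmetic from the pvdisplay/vgdisplay output
# quoted below (numbers copied straight from that output).
pe_size_mib=32          # "PE Size (KByte) 32768" = 32 MB extents
total_pe=2114           # "Total PE"
alloc_pe=2112           # "Allocated PE"

free_mib=$(( (total_pe - alloc_pe) * pe_size_mib ))
vg_gb=$(awk "BEGIN{printf \"%.2f\", $total_pe * $pe_size_mib / 1024}")

echo "VG size: $vg_gb GB"     # 66.06, matching vgdisplay
echo "Free:    $free_mib MB"  # 64, matching "64.00 MB free"
```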

You might really want to consider reloading this server and separating some of the stuff from / - especially /var, so an out-of-control log file can't crash your server by filling up the root filesystem.
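For example, something along these lines in /etc/fstab, with separate LVs for /var and /home (the LV names and layout here are just illustrative, not what your box actually has):

```
/dev/VolGroup00/LogVolRoot   /      ext3   defaults   1 1
/dev/VolGroup00/LogVolVar    /var   ext3   defaults   1 2
/dev/VolGroup00/LogVolHome   /home  ext3   defaults   1 2
/dev/VolGroup00/LogVolSwap   swap   swap   defaults   0 0
```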




-----Original Message-----
From: rhn-users-bounces at redhat.com
[mailto:rhn-users-bounces at redhat.com] On Behalf Of Sead Dzelil (Student)
Sent: Monday, June 26, 2006 6:37 PM
To: Red Hat Network Users List
Subject: Re: [rhn-users] LVM


OK. Here is the output of these commands:

[root@ip023-8 ~]# fdisk -l

Disk /dev/sda: 146.5 GB, 146548981760 bytes
255 heads, 63 sectors/track, 17816 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           9       72261   de  Dell Utility
/dev/sda2   *          10         270     2096482+   6  FAT16
/dev/sda3             271         283      104422+  83  Linux
/dev/sda4             284        8908    69280312+   5  Extended
/dev/sda5             284        8908    69280281   8e  Linux LVM

[root@ip023-8 ~]# pvscan
  /dev/cdrom: open failed: No medium found
  PV /dev/sda5   VG VolGroup00   lvm2 [66.06 GB / 64.00 MB free]
  Total: 1 [66.06 GB] / in use: 1 [66.06 GB] / in no VG: 0 [0   ]

[root@ip023-8 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               VolGroup00
  PV Size               66.06 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              2114
  Free PE               2
  Allocated PE          2112
  PV UUID               akfgOT-b3oe-juDZ-0QGR-Y5Be-RezY-xjIZ8k

[root@ip023-8 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                gBNQtN-YtVB-6cTR-nnYs-ayxt-9Xwo-qpKVNy
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                64.06 GB
  Current LE             2050
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                Pnep7s-BlfT-VUED-9dHi-0yH9-z1Wa-59cM0L
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.94 GB
  Current LE             62
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1

[root@ip023-8 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               66.06 GB
  PE Size               32.00 MB
  Total PE              2114
  Alloc PE / Size       2112 / 66.00 GB
  Free  PE / Size       2 / 64.00 MB
  VG UUID               x6mvEM-VYuO-muql-HqN7-jrTz-lr3n-a6d8k6

I hope you guys can help. Thanks in advance!

Sead




On Mon, 26 Jun 2006 18:29:18 -0400
 "Lamon, Frank III" <Frank_LaMon at csx.com> wrote:
> Lots of red flags all over the place here - converting a mirrored set
> to a striped set on the fly sort of (it sounds like you haven't reloaded
> the OS)?
> 
> But let's see what you have now. Can you give us the output of the
> following commands?
> 
> fdisk -l
> pvscan
> pvdisplay
> lvdisplay
> vgdisplay
> 
> 
> 
> 
> -----Original Message-----
> From: rhn-users-bounces at redhat.com
> [mailto:rhn-users-bounces at redhat.com] On Behalf Of Sead Dzelil (Student)
> Sent: Monday, June 26, 2006 6:16 PM
> To: gforte at udel.edu; Red Hat Network Users List
> Subject: Re: [rhn-users] LVM
> 
> 
> Thank you very much for taking the time to help me. I only have two 73GB
> hard drives right now and I need 100+GB of storage. I am not concerned
> about redundancy because the server is used for computations, not for
> important storage. Please help me out if you know your LVM. The computer
> sees the whole 146GB but the volume group is on only 73GB. What can I
> do to resize it and make the OS see the whole disk? Please help.
> 
> Thank You
> 
> On Mon, 26 Jun 2006 04:58:19 -0400
>  Greg Forte <gforte at leopard.us.udel.edu> wrote:
> > Wow, where to start ...
> > 
> > First of all, Travers: he's already got hardware RAID, he said as
> > much: "... went into the RAID BIOS ...".  It's built-in to the 6800
> > series.
> > 
> > Sead: your foremost problem is that you don't have enough disk space
> > for any kind of meaningful redundancy if you need 100+ GB.  RAID0 isn't
> > really RAID at all (unless you replace "redundant" with "risky") - RAID0
> > stripes the data across N of N disks with no parity data, which means if
> > one disk fails the whole system is gone.  Instantly.  It's basically
> > JBOD with a performance boost due to multiplexing reads and writes.  To
> > put it bluntly, no one in their right mind runs the OS off of a RAID0
> > volume.
> > 
> > Beyond that, I'm surprised (impressed?) that the OS even still boots -
> > after the conversion any data on the disks should be scrap.  Maybe the
> > newer Dell RAID controllers are able to convert non-destructively.  I'll
> > assume that's true, in which case the reason the OS doesn't see the
> > difference is that you still need to change both the partition size
> > (in this case, the logical volume extent size) and the filesystem
> > itself.  In which case you COULD theoretically use lvextend to enlarge
> > the LVM volume, and then resize2fs to grow the filesystem (assuming it's
> > ext2/3, which it almost definitely is).  BUT, there's still the problem
> > I mentioned above.
> > 
> > The first thing you need to do is fix the physical disk problem.
> > Depending on how the machine is configured, this may be easy or hard.
> > A 6800 has 10 drive slots on the main storage backplane (the bays on
> > the right), and if the two existing drives are on that backplane then it
> > _should_ be a simple matter of buying a third 73GB disk, installing it,
> > going into the RAID BIOS and converting again to RAID5 (assuming it can
> > also do that conversion without trashing the disks - I'm guessing it can
> > if it did RAID1 to RAID0), and then doing lvextend and resize2fs as
> > described above (I know, you want more detail, but you need the disk
> > first ;-)
> > 
> > BUT ... I'm gonna go out on a limb and guess that the machine was
> > configured with the 1x2 secondary backplane in the peripheral bay area
> > on the left.  If that's the case, then you're not going to be able to
> > add a third disk in that area, and I don't think you can configure a
> > RAID with disk members on different backplanes - and even if you can,
> > I'd guess the 10 bays in the main storage are all filled, or it wouldn't
> > be configured with the extra backplane to begin with.  You'd have to
> > check with Dell tech support about that, to be sure.  But assuming all
> > of my guesses are right, the only option left is going to be to buy two
> > larger disks and configure them for RAID1, just like the two 73's you've
> > got now.  The other bad news in that situation is that you're probably
> > going to have to reinstall from scratch - you could probably manage to
> > image from the existing volume to the new one, but it's also almost
> > surely going to end up being more effort (if you've never done that
> > sort of thing before) than simply re-installing.
> > 
> > Good luck!  Once you do get the disk situation worked out, let us know
> > and I (or someone else) can help you through the lvextend+resize2fs, if
> > necessary.  I suspect you won't end up needing that, though.
> > 
> > -g
> > 
> > Travers Hogan wrote:
> > > It looks as if you have software RAID 1. You cannot change this - you
> > > must rebuild your system. I would also suggest getting a hardware RAID
> > > controller.
> > > rgds
> > > Trav
> > > 
> > > ________________________________
> > > 
> > > From: rhn-users-bounces at redhat.com on behalf of Sead Dzelil (Student)
> > > Sent: Sun 25/06/2006 03:10
> > > To: rhn-users at redhat.com
> > > Subject: [rhn-users] LVM
> > > 
> > > 
> > > 
> > > I am a system administrator with no experience with lvm. I have used
> > > fdisk in the past and I was very comfortable with that. I have a very
> > > important question. I have a Dell PowerEdge 6800 server that came with
> > > two 73GB hard drives in a RAID 1 configuration. The order was placed
> > > wrongly, because we need 100+ GB of storage. I went into the RAID BIOS
> > > and changed it from RAID 1 to RAID 0. Now the RAID BIOS displays the
> > > logical volume with the full 146GB of storage.
> > > 
> > > The problem is that in the OS (Red Hat Enterprise Linux) nothing has
> > > changed. It still only sees the 73GB of storage. What can I do to get
> > > the system to see the whole 146GB? I need as detailed info as possible
> > > because I have never used lvm before. Thank You in advance.
> > > 
> > > Sead
> > > 
> > > _______________________________________________
> > > rhn-users mailing list
> > > rhn-users at redhat.com
> > > https://www.redhat.com/mailman/listinfo/rhn-users
> > > 
> > > 
> > > 
> > > 
> > > ------------------------------------------------------------------------
> > > 
> > 
> 
> 
> 
> 




