
Re: [linux-lvm] Bad disk?

On 11/11/2010 11:46 AM, Stuart D. Gathman wrote:
On Wed, 10 Nov 2010, Mauricio Tavares wrote:

Tell us exactly what you mean by "put a LVM on it".  Did you run
pvcreate?  vgcreate?  lvcreate? You might find the output of "pvs"
enlightening.  That will tell us what PVs you have created.
And list /dev/mapper so we know what dm-0 is, and include the output of

	Let me put it this way: I thought I did. After creating the
partition and setting its type to LVM (8e), I ran

pvcreate /dev/sdc1
vgcreate export /dev/sdc1
lvcreate -L 400G --name vms export

and then used mkfs.ext4 to create a filesystem on /dev/mapper/export-vms,
and off I went. Do you think I missed a step?
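For what it's worth, each LVM layer can be sanity-checked right after those steps. A bash sketch that just prints the checks (the real commands need root and the actual devices, so this only echoes them):

```shell
# Print the verification command for each LVM layer created above.
# (Sketch only: pvs/vgs/lvs need root and a real LVM stack to run.)
for cmd in "pvs /dev/sdc1" "vgs export" "lvs export"; do
    echo "would run: sudo $cmd"
done
```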

Great.  Now include output of "lvs"

raub strangepork:~$ sudo lvs export
  /dev/dm-0: read failed after 0 of 4096 at 0: Input/output error
  LV   VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  vms  export -wi-a- 400.00g
raub strangepork:~$
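(Aside: the Attr column above can be decoded by position. A bash sketch, with the field meanings taken from the lvs man page:)

```shell
# Decode the "-wi-a-" Attr string printed by lvs above.
# Positions (0-based): 0 type, 1 permissions, 2 allocation,
# 3 fixed minor, 4 state, 5 open -- per the lvs man page.
attr='-wi-a-'
echo "permissions: ${attr:1:1}"   # w = writeable
echo "state:       ${attr:4:1}"   # a = active
```

So the LV itself is writeable and active; the I/O error line appears to be emitted while lvs scans the device, not in the attributes themselves.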

BTW, if you really suspect a disk error, test for it directly.
E.g., you can run

# dd if=/dev/sdc1 of=/dev/null bs=256k

to read through the partition or

# smartctl -t long /dev/sdc

to initiate a long self-test of the disk (this needs smartmontools installed).
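Once the long test completes (it can take hours), "smartctl -l selftest /dev/sdc" prints the self-test log. A bash sketch of checking one such log line (the sample line is illustrative, not from this thread):

```shell
# Sample self-test log line as printed by "smartctl -l selftest" (illustrative).
line='# 1  Extended offline    Completed without error       00%'
case "$line" in
    *'Completed without error'*) verdict='self-test passed' ;;
    *)                           verdict='self-test failed or aborted' ;;
esac
echo "$verdict"
```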

A brand-new disk that flunks the self-test is indeed defective.

However, for real physical I/O errors, there would be errors logged
in /var/log/messages referencing sdc (as opposed to dm-0), so I still
think it is a logical error.
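A bash sketch of that distinction, using an illustrative kernel-log line (real lines would come from /var/log/messages or dmesg):

```shell
# Classify an I/O error line by which layer reported it (sample line only).
log='Buffer I/O error on device dm-0, logical block 0'
case "$log" in
    *sdc*) layer='physical (underlying disk sdc)' ;;
    *dm-*) layer='logical (device-mapper layer)' ;;
esac
echo "error layer: $layer"
```

An error naming dm-0 but never sdc points at the mapping, not the disk, which is the reasoning above.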
