
looking for some advice



Hi list,

I have a 750 GB SATA drive with a single ext3 partition on it. I just recently (today) formatted it and began writing data to it, but after a while I got these messages:


EXT3 FS on sda1, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
APIC error on CPU1: 00(40)
APIC error on CPU1: 40(40)
APIC error on CPU1: 40(40)
APIC error on CPU1: 40(40)
APIC error on CPU1: 40(40)
attempt to access beyond end of device
sda1: rw=0, want=7255425000, limit=1465144002
EXT3-fs error (device sda1): ext3_free_blocks: Freeing blocks not in datazone - block = 906928124, count = 1
Aborting journal on device sda1.
ext3_abort called.
EXT3-fs error (device sda1): ext3_journal_start_sb: Detected aborted journal
Remounting filesystem read-only
EXT3-fs error (device sda1) in ext3_reserve_inode_write: Journal has aborted
EXT3-fs error (device sda1) in ext3_reserve_inode_write: Journal has aborted
EXT3-fs error (device sda1) in ext3_orphan_del: Journal has aborted
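A quick sanity check on those numbers: the limit is in 512-byte sectors, so it matches the size of the disk, while the sector being requested sits nearly five times past the end of the device. To me that suggests garbage block pointers in the metadata rather than a simple off-by-one:

```shell
# limit=1465144002 is in 512-byte sectors; that works out to the whole 750 GB disk
echo $(( 1465144002 * 512 ))   # 750153729024 bytes, ~750 GB
# the failing request (want=7255425000) is nearly 5x past the end of the device
echo $(( 7255425000 * 512 ))   # 3714777600000 bytes, ~3.7 TB
```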


I had something similar happen on a completely different machine some time ago, and I believe it turned out to be bad RAM. I've run badblocks on the drive and it came back clean. This has happened under both RHEL 4 and RHEL 5, so I think it has to be a hardware problem.
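For reference, the badblocks run was just the stock read-only scan; I gather the non-destructive read-write mode (-n) works the drive harder, so that may be worth trying too (device name taken from the log above):

```shell
# default read-only scan -- this is the pass that came back clean
badblocks -sv /dev/sda
# non-destructive read-write test: much slower, but exercises the drive harder
badblocks -nsv /dev/sda
```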

Can anyone provide some pointers on where to go?

Every time I unmount the drive and start fsck, I get about two to three minutes into the check and then hit a kernel panic that halts the machine.

The fsck output up to the panic is:


[root babar /]# fsck /dev/sda1
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
/dev/sda1: recovering journal
/dev/sda1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inode 453936 is in use, but has dtime set.  Fix<y>? yes

Inode 453936 has compression flag set on filesystem without compression support. Clear<y>? yes

Inode 453936, i_blocks is 58, should be 0.  Fix<y>? yes

Inodes that were part of a corrupted orphan linked list found.  Fix<y>? yes

Inode 1035544 was part of the orphaned inode list.  FIXED.
Inode 1671448 is in use, but has dtime set.  Fix<y>? yes

Inode 1671448 has imagic flag set.  Clear<y>? yes

Inode 1671448 has compression flag set on filesystem without compression support. Clear<y>? yes

Inode 1671448, i_size is 8029468887176259360, should be 0.  Fix<y>? yes

Inode 1671448, i_blocks is 740766520, should be 0.  Fix<y>? yes

Inode 2524688 is in use, but has dtime set.  Fix<y>? yes

Inode 2524688 has imagic flag set.  Clear<y>? yes

Inode 2524688 has compression flag set on filesystem without compression support. Clear<y>? yes

Inode 2524688, i_size is 4194594118404629538, should be 0.  Fix<y>? yes

Inode 2524688, i_blocks is 1868787572, should be 0.  Fix<y>? yes

Inode 4079615 is in use, but has dtime set.  Fix<y>? yes

Inode 4079615, i_size is 14052004244018609920, should be 0.  Fix<y>? yes

and panic sets in.



I don't know of a good way to capture the actual panic text, though. In any event, I'd appreciate any insight you all may have.
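One thing I might try for capturing the panic is netconsole, which streams kernel messages over UDP to a second machine and so survives a local crash. A sketch (the addresses and interface are placeholders for my LAN, not real config):

```shell
# stream kernel messages to another box over UDP (addresses are placeholders);
# on the receiving machine, listen with: nc -u -l 6666
modprobe netconsole netconsole=6665@192.168.0.2/eth0,6666@192.168.0.3/00:11:22:33:44:55
```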

Thanks in advance,
Tim

