
Re: oops ext3 in journal_write_metadata_buffer


On Thu, May 31, 2001 at 10:00:21PM +0200, Cedric Ware wrote:

> If it can help, there's still a data point in:
> http://www.redhat.com/mailing-lists/ext3-users/msg00521.html
> (message <20010516162003.A22608@com.enst.fr>).

That one I can't fathom yet, but I've added extra debugging to my
local tree to try to get to the bottom of it and I'm running
100-process stress tests on a 4-way box at the moment to see if that
bears any fruit.

Florian's assert is very different --- it belongs to a class of
bug-checks which should never hit in practice but which _can_ be
triggered not only by software bugs but also by a corrupt on-disk
format.  Of course, ext3 shouldn't ever result in such corruption, but
any hardware failure might lead to it.

The particular assert failure Florian saw can happen if a block gets
allocated to two files, and that can occur if a bitmap block gets
corrupted such that an already-allocated block gets reallocated to
another file.  If that happens, one process can assume ownership of
the block and start journaling it, while another already has IO
outstanding on it.  The asserts are there because ext3 is paranoid
about the order in which writes occur: it needs to be to ensure that
recovery works perfectly.
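That double-allocation path can be sketched in miniature (a toy Python
allocator, not the real ext3 code; the class and field names are
illustrative, and the assert stands in for ext3's paranoid bug-check):

```python
# Toy sketch: a flipped bit in a block bitmap lets the allocator hand
# out an in-use block a second time; an ownership check (standing in
# for ext3's journaling assert) catches the double allocation.
class BlockAllocator:
    def __init__(self, nblocks):
        self.bitmap = [False] * nblocks   # False = free, True = in use
        self.owner = {}                   # block -> file; journal's view

    def alloc(self, fileno):
        for b, used in enumerate(self.bitmap):
            if not used:
                self.bitmap[b] = True
                # Paranoia: the bitmap said "free", so no one may own it.
                assert b not in self.owner, f"block {b} already owned"
                self.owner[b] = fileno
                return b
        raise RuntimeError("no free blocks")

alloc = BlockAllocator(8)
b = alloc.alloc(fileno=1)     # file 1 is given block 0
alloc.bitmap[b] = False       # simulated corruption: the bit flips to "free"
try:
    alloc.alloc(fileno=2)     # file 2 is handed the same block
except AssertionError as e:
    print("caught:", e)       # caught: block 0 already owned
```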

I've just found one recovery bug (tracked down by Andrew Morton:
thanks!) which might cause such problems.  It is _more_ likely that it
would cause corruption of recently written data after a crash,
but it is still possible that it could result in the loss of a bitmap
block replay during recovery, which could quite definitely lead to
Florian's assert failure during a subsequent boot.

I'll push out a new ext3 tomorrow with the fix for the recovery bug.
Since e2fsck shares the same recovery code, it will need to be updated
too. <sigh>  [The problem is actually a missing host byte-order
conversion when scanning revoked blocks.]
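As a hedged illustration of that class of bug (toy Python, not the
actual JBD recovery code; function names are made up): the journal
stores its fields big-endian on disk, so a scan that skips the
conversion reads a different block number on a little-endian host and
the wrong block ends up in the revoke table:

```python
# Toy sketch of a missing byte-order conversion while scanning revoke
# records.  Journal fields are big-endian on disk; the buggy variant
# interprets the same bytes in little-endian (host) order.
import struct

def scan_revoke_record(raw: bytes) -> int:
    """Correct: decode the on-disk (big-endian) block number."""
    (block,) = struct.unpack(">I", raw)
    return block

def scan_revoke_record_buggy(raw: bytes) -> int:
    """Buggy: read the bytes in little-endian host order, no conversion."""
    (block,) = struct.unpack("<I", raw)
    return block

on_disk = struct.pack(">I", 0x1234)            # revoke record for block 0x1234
print(hex(scan_revoke_record(on_disk)))        # 0x1234
print(hex(scan_revoke_record_buggy(on_disk)))  # 0x34120000 -- wrong block
```

The effect during recovery is exactly the failure mode described
above: the wrong block number gets revoked (or the right one doesn't),
so a bitmap block's replay can be lost.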

Hmm.  Actually, I tell a lie: I can imagine a scenario in which the
same bug could lead to Cedric's assert failure.

Anyway, ext3 is currently ultra-paranoid about data integrity and will
BUG-trap on any such problem.  High on my TODO list is to audit those
bug-checks, locate all the ones which might be triggered by corrupt
on-disk data, and relax their response so that they raise the standard
ext2 configurable error response (i.e. panic, remount read-only, or
warn and ignore).
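The ext2-style knob being referred to works roughly like this (a toy
sketch only; the real behaviour lives in the kernel's error-handling
paths and is selected by the errors= mount option, and the function
and policy names here are illustrative):

```python
# Toy sketch of the ext2-style configurable error response: the
# "errors=" mount option picks what a detected corruption does.
def handle_fs_error(policy: str, msg: str):
    if policy == "panic":
        raise SystemExit(f"panic: {msg}")   # errors=panic
    if policy == "remount-ro":
        return ("read-only", msg)           # errors=remount-ro: refuse writes
    return ("read-write", msg)              # errors=continue: warn and go on

print(handle_fs_error("remount-ro", "corrupt bitmap in block group 3"))
```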

