
Re: Mount usb drive with mount -a fails after f10 upgrade

On Wed, Jan 7, 2009 at 3:41 AM, Joe W. Byers <ecjbosu aol com> wrote:
> I never set this. I hooked this drive up in 2005 and have been using without
> any issues since until this upgrade.  My server would reboot and notify me
> that this needed a check and I did it or did not.  Never was a problem until
> this upgrade.

There is no value in me trying to figure out how this was set; I
can't reasonably know. I can say that I've never come across a Fedora
install setting the max mount count for a device to 1, but I've also
never done the sort of upgrade path you are doing, so I'm in no
position to confirm how you got into the state you are in. I can't
know your historical record; all I can do is help you diagnose your
system based on the information your system can accurately provide
about its current state. The state is the state, and the superblock
is the superblock.

For all we know this was set in 2005 on initial formatting of the
filesystem, but the setting was ignored by EL.  I'll have to
requisition the top secret video tapes of your past activity and
review the tapes looking for what created the setting.

> The only interesting thing with EL5 was the usb would not mount during the
> normal mounting sequence but give me an [Failure], then mount later on
> during the boot up.  I never really worried about that because the usb drive
> was getting mounted automatically.  Now it is not and that is was I need.

So you were getting an EL5 failure at boot; you were probably using a
custom script to run the mount commands "later on."  Again, I'm not in
a position to diagnose your EL5 behavior.

The point is simply this: the information from tune2fs associated
with the max mount count looks odd to me.  What's also problematic in
your tune2fs output is that the Filesystem state is listed as "not
clean".
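For reference, this is roughly where those fields appear in the
tune2fs -l output (the device name and values here are illustrative,
not taken from your actual output; substitute your real device node):

```shell
# Illustrative: inspect the relevant superblock fields
# (replace /dev/sdb1 with your actual USB device)
tune2fs -l /dev/sdb1 | grep -iE 'filesystem state|mount count'
# Fields of interest look like:
#   Filesystem state:         not clean
#   Mount count:              1
#   Maximum mount count:      1
# A maximum mount count of 1 forces a full check on every mount,
# which is almost never an intentional setting.
```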

If I were you, I would set the max mount count to 0 or -1 with
tune2fs -c, turning off the max mount count check.  I would then run
fsck.ext2 again with -f to force the check, then run tune2fs -l and
see if the Filesystem state changes from "not clean" to "clean".
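Concretely, that sequence would look something like this (assuming
the USB drive is /dev/sdb1; substitute your actual device node, and
make sure the filesystem is unmounted before running fsck):

```shell
# Unmount first; never fsck a mounted filesystem
umount /dev/sdb1

# Disable the periodic max-mount-count check (-c 0 or -c -1 both work)
tune2fs -c 0 /dev/sdb1

# Force a full check regardless of the recorded state or mount count
fsck.ext2 -f /dev/sdb1

# Verify the result; the state should now read "clean"
tune2fs -l /dev/sdb1 | grep -i 'filesystem state'
```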

