[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

rebooting more often to stop fsck problems and total disk loss



Hi,

I run several hundred servers that are used heavily (webhosting, etc.)
all day long.

Quite often we'll have a server that either needs a very long fsck
(10 hours on a 200 GB drive) or an fsck that eventually dumps
everything into lost+found (pretty much a total loss).

Would rebooting these servers monthly (or at some other frequency) prevent this?

Is it correct to picture this as small errors compounding over time,
so that more frequent reboots would let quick fscks fix the errors
before they become huge?

(The OS is Red Hat 7.3 and EL3.)
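In case it helps frame the question: on ext2/ext3 the superblock records a mount count and a check interval, and e2fsck is triggered automatically when either threshold is hit, so you can see (and tune) how often checks would happen without guessing. A sketch of inspecting those fields with tune2fs, run here against a throwaway image file rather than a real device (the /tmp/test.img path and the 20-mount/30-day values are just examples):

```shell
# Build a small ext2 image to poke at (no root needed for a plain file)
dd if=/dev/zero of=/tmp/test.img bs=1M count=8 2>/dev/null
mke2fs -q -F /tmp/test.img

# Show the fields that decide when a boot-time fsck fires
tune2fs -l /tmp/test.img | grep -E 'Mount count|Maximum mount count|Check interval'

# Example: force a check every 20 mounts or every 30 days, whichever first
tune2fs -c 20 -i 30d /tmp/test.img
```

On a live box you would point tune2fs -l at the actual device (e.g. the root partition) to see how close it is to its next forced check.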

Thanks for any input!

