[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

RE: audit 0.6 release

> -----Original Message-----
> From: linux-audit-bounces redhat com 
> [mailto:linux-audit-bounces redhat com] On Behalf Of 
> Valdis Kletnieks vt edu
> Sent: Thursday, January 06, 2005 4:17 PM
> To: Linux Audit Discussion
> Subject: Re: audit 0.6 release 
> logrotate doesn't do a very good job of handling "roll to 
> next file when this one is 40M in size", because the cron job 
> is probably not running at the time that the log gets to 40M. 
>  The semantics of "rotate at 2AM if it's over 40M then" are 
> quite different from "rotate at current clocktime 11:37AM if 
> we hit 40M then...".
> Also, in a priv-separated environment, only the "security 
> officer" role should be allowed to remove an audit file 
> (while logrotate's "rotate" command will rm the oldest one 
> if/when needed).  So you probably need to use *two* logrotate 

Instead of the logrotate methodology, how about letting auditd do it?

For my purposes I would like to see the audit logs saved with names like
'audit.log.2004m12d01h0001s00CST_2004m12d04h1231s42CST' (and gzipped or
bzipped).  auditd could save the time stamp of the last log save, and
when the log is full, or at the next user-desired time, atomically save
the existing log and start a new one without missing a message (then
start a background zip job for the saved log).
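The rotation step above can be sketched roughly as follows. This is only
an illustration in Python, not auditd code; the `rotate` helper and the
exact timestamp format are my own assumptions, and a real daemon would do
the rename against its open file descriptor and reopen the log itself.
os.rename is atomic on POSIX within one filesystem, which is what keeps
messages from being lost at the switchover.

```python
import gzip
import os
import shutil
import threading
import time

def timestamp(t):
    # Approximates the start/end naming scheme suggested above,
    # e.g. 2004m12d01h0001s00CST; %Z supplies the local zone name.
    return time.strftime("%Ym%md%dh%H%Ms%S%Z", time.localtime(t))

def rotate(log_path, started_at):
    """Atomically retire the current log and start a fresh one,
    then compress the retired file in the background."""
    now = time.time()
    saved = "%s.%s_%s" % (log_path, timestamp(started_at), timestamp(now))
    os.rename(log_path, saved)          # atomic on POSIX, same filesystem
    open(log_path, "a").close()         # recreate the live log immediately

    def compress():
        # Background gzip of the saved log, then drop the uncompressed copy.
        with open(saved, "rb") as src, gzip.open(saved + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        os.remove(saved)

    worker = threading.Thread(target=compress)
    worker.start()
    return saved, worker
```

The size trigger ("rotate the instant we hit 40M") would simply call
`rotate` from the daemon's write path, which avoids logrotate's problem of
only checking at cron time.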

Tom Browder
