[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: readahead 1.4



At 10:34 PM +0100 3/1/07, Karel Zak wrote:
>On Thu, Mar 01, 2007 at 02:40:08PM -0500, Dave Jones wrote:
>> On Thu, Mar 01, 2007 at 11:50:26AM +0100, Karel Zak wrote:
>>  >
>>  >  I have just pushed out a new 1.4 version of the readahead util.
>>  >  The changes are:
>>  >
>>  >     * move project to hosted.fedoraproject.org
>>  >     * source code maintained by GIT
>>  >     * various cleanups
>>  >     * new build-system based on autotools
>>  >     * --sort / --dont-sort support
>>  >     * add readahead-collector (based on audit system,
>>  >       requires audit-libs[-devel])
>>  >     * improve init scripts (supports full, custom and fast mode)
>>  >     * add /etc/cron.daily/readahead.cron (with readahead --sort)
>>  >
>>  >  Web page:
>>  >
>>  >     https://hosted.fedoraproject.org/projects/readahead
>>  >
>>  >
>>  >  The code is not tested with FC7, because libauparse (from
>>  >  audit-libs-devel) is broken in FC7 now.
>>  >
>>  >  If you want to generate your customized version of readahead lists
>>  >  you need to boot with "init=/sbin/readahead-collector". Also see
>>  >  /etc/readahead.conf and /usr/share/doc/readahead-1.4/README*.
>>
>> Out of curiosity, how much overhead would it add to always run
>> the collector without needing any boot arguments?
>
> I don't have any numbers (yet), but I expect that audit rules for all
> open(), stat(), ... calls will have a negative performance impact on the kernel.
>
> (Well, Steve Grubb added to CC:-)
>
> The second problem is that auditd removes all rules during startup.
> It doesn't assume that any other tool uses the kernel audit
> system :-)   (you need "chkconfig auditd off" now)

Also, if it were to always run:

Readahead-collector allocates memory in big chunks and uses a lot of it:
when I ran it, it produced 39 MB of /var/log/readahead-rac.log, which
boiled down to about 0.33 MB of /etc/readahead.d/custom.* (but see bz
230687).  I also note that readahead-collector collects without limit,
while readahead only uses the first 32K entries.  So while
readahead-collector currently uses too much memory to run every time, if
it used a better data structure, say a balanced tree, and parsed the
audit data into the tree as the data arrived, it could use about 2% of
what it currently does.

Neither program seems to account for the memory used by the files that
are read, though readahead can report it. (Possibly readahead-collector
should skip the largest files, since they probably aren't mostly used
anyway and don't cause as much seeking.)

Readahead-collector runs for 5 minutes, so its output might need pruning
if it ran on each boot.  When run manually, one knows to start things up
and then wait for readahead-collector to finish.  BTW, the collection
loop has a 30-second timeout that isn't currently being used; it might
be reasonable to stop collecting once no event has arrived within that
time.

If readahead-collector could run automatically, readahead might request
it for the next boot if "too many" files are not found (say, after a
firefox update).
-- 
____________________________________________________________________
TonyN.:'                       <mailto:tonynelson georgeanelson com>
      '                              <http://www.georgeanelson.com/>

