
Re: [RFC] programmatic IDS routing



On Wednesday 19 March 2008 17:31:49 Linda Knippers wrote:
> Steve Grubb wrote:
> > On Wednesday 19 March 2008 15:55:23 Linda Knippers wrote:
> >>> Because this IDS is part of the audit system.
> >>
> >> Is there something that describes what you're building so we can
> >> have the right context to comment on this?
> >
> > I presented this:
> >
> > http://people.redhat.com/sgrubb/audit/summit07_audit_ids.odp
> >
> > at last year's Red Hat Summit. The idea is roughly the same, but the
> > configuration is slightly different.
>
> To me, what's in the slides looks better.

I think that approach won't scale. I've been thinking about this and other 
surrounding issues for a long time now. Speed is very important in an IDS.


> There are lots of things an IDS might care about, some of which can be
> expressed in audit rules and some which can't.

Right, there are going to be about 15-20 different detections of potential 
problems. The only three I'm concerned with right now are escalation of watched 
files, escalation of executables, and attempts to make something an executable. 
The others are either solved or not in development right now.


> I think the IDS should make sure that the rules it cares about are there and
> it should react when there's a change to the audit configuration.

That is difficult. There's no way to tell when a new set of rules is about to 
be loaded that may shadow the IDS rules. It is far easier to just let the 
admin express in the rules which ones he/she cares about. Easier also means 
that it's more likely to work as advertised.
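For example (just a sketch of the idea; the ids-* key names here are made up 
for illustration, not a fixed convention), the admin could tag the rules he 
wants escalated right in audit.rules, alongside ordinary archival rules:

```
# Ordinary archival watch: logged to disk, ignored by the IDS
-w /etc/passwd -p wa -k identity

# Rules the admin tagged for realtime escalation; the key carries the routing
-w /etc/shadow -p wa -k ids-file-med
-w /usr/bin -p wa -k ids-exec-low
```

One set of rules, one loader, and the escalation intent is in plain sight 
next to the rule that generates the events.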


> >> I assumed you were building something that would be a dispatcher plug-in
> >> or something rather than building something new into the core audit
> >> subsystem.
> >
> > It is. It's tightly integrated.
>
> Well, according to the slides it's either layered or plugged in, which
> I think is better than tightly integrated.  I don't think audit should
> have special knowledge of the things that might be plugged into it.

It doesn't. The admin does.


> >>>> If an IDS has a dependency on audit and specific audit rules to get
> >>>> the information it needs, it can use the information in its config
> >>>> file to construct the audit rules it needs.
> >>>
> >>> Then you surely have duplicate rules controlled by 2 systems. The first
> >>> rule in the audit.rules file is -D which would delete not only the
> >>> audit event rules for archival purposes, but any IDS placed rules.
> >>> There is not a simple way of deleting the rules placed by auditctl vs
> >>> the ones placed by the IDS. The IDS system would also need to be
> >>> prodded to reload its set of rules again.
> >>
> >> An IDS should be able to be prodded to reload its rules.
> >
> > Sure, but you have the audit system loading CAPP rules with all those
> > watches and then maybe the admin wants any write to shadow to be a high
> > alert since he's the only user and won't be changing his password. We
> > would either need to analyze the rules and make sure they are simplistic
> > enough for the IDS to be guaranteed an event, or just add more rules to
> > make sure we got 'em. In this case, we may wind up creating records that
> > the admin did not want on the disk.
>
> If it's a high alert, why wouldn't you also want it on the disk?

You do.


> And how would you specify that you do or don't want it written?

With my proposal, everything is written to disk. The difference is that the 
admin is aware, as he/she adds the rule, of what consequences it has. They 
make the choice between filling the disk up and having enough events to catch 
what they want realtime notification of.


> Isn't the function of the auditd to "simply dequeue events from netlink
> interface as fast as possible and log them to disk" (slide 17)?

Yep.


> I don't think it ought to be deciding which audit events go to disk
> vs. go to specific dispatcher plug-ins.

It shouldn't.


> > I just see this as progressing into a mess. We have 2 things that have
> > different ideas about what needs to be tracked. Neither understanding why
> > the other is doing something and not happy because either too much data
> > is going to disk or not enough events to trap something important.
>
> I don't see why this has to progress into a mess.

Because the IDS will be insisting on widening the rules to get events and the 
admin will be trying to cut back disk usage. A program can't be smart about 
the rules it needs unless you let the admin better express what they want 
escalated. Perhaps the admin only cares about users >= 500 accessing a 
directory when they fail due to EPERM. You would have to add the ability to 
express that to the IDS configuration. 
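That kind of condition is already expressible today with auditctl field 
filters; a rough sketch (the directory path and key name are placeholders I 
made up for the example):

```
auditctl -a always,exit -S open -F dir=/srv/secret \
         -F auid>=500 -F exit=-EPERM -k ids-fail-high
```

Teaching the IDS configuration to express the same thing means reinventing 
the whole field-filter language that auditctl already has.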

If you continue down that path, the IDS system will have become auditctl, 
except it's working in a disjointed way. When audit rules are reloaded, there 
will be a lag between the rule being deleted and a new rule being loaded that 
gives the IDS the events it needs. If not, there is a huge amount of code 
that needs to be written to add rules to the audit.rules file.

The difference boils down to this. I can code up the detector this afternoon 
based on my proposal. Doing it in a more complicated way means taking a month 
or two to get it right.
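To illustrate why the key-based approach is quick to code up (this is a 
sketch of my own, not the actual detector; the record parsing is simplified 
and the ids-* key names are invented for the example), an audispd-style 
plugin only has to pull the key out of each event and escalate the ones the 
admin tagged:

```python
import re
import sys

# Matches key="ids-..." (quoted or unquoted) in an audit record
KEY_RE = re.compile(r'key="?(ids-[\w-]+)"?')

def extract_ids_key(record):
    """Return the ids-* key of an audit record, or None if untagged."""
    m = KEY_RE.search(record)
    return m.group(1) if m else None

def escalate(record):
    """Return (severity, record) for tagged events, None otherwise."""
    key = extract_ids_key(record)
    if key is None:
        return None          # not a rule the admin tagged for the IDS
    # The admin encoded the severity in the key itself, e.g. ids-file-high
    severity = key.rsplit('-', 1)[-1]
    return (severity, record)

if __name__ == '__main__':
    # audispd feeds events on stdin, one record per line
    for line in sys.stdin:
        hit = escalate(line)
        if hit:
            print("ALERT[%s]: %s" % (hit[0], hit[1].strip()))
```

Everything else, widening or narrowing what gets escalated, stays in 
audit.rules where the admin already manages it.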


> > By using the key field, its in plain sight and done with purpose. Not
> > enough events to trap something important, widen it in audit.rules and
> > you also know that this will send more to disk. No surprises there.
>
> I'm actually in favor of using the key, just in using it like it's used
> today.  All the capp watches have unique keys, and an admin could create
> more/different rules with different keys.

Sure. This proposal doesn't affect CAPP at all.


> What if other plug-ins also want to use that field?

That would be fine by me. I invite that use as it means people are trying to 
do something useful with this subsystem.  :)

-Steve

