[redhat-lspp] NetLabel performance numbers

Paul Moore paul.moore at hp.com
Thu Jul 13 13:05:18 UTC 2006


On Wednesday 12 July 2006 5:12 pm, Valdis.Kletnieks at vt.edu wrote:
> On Wed, 12 Jul 2006 16:45:44 EDT, Paul Moore said:
> >                  (in 10^6 bits/sec)           (rate / sec)
> >   TEST      tcp_stream      udp_stream     tcp_rr       udp_rr
> >  =================================================================
> >   NoPatch    941.52          961.61         10778.58     10901.03
> >   Disable    941.53          961.60         10814.46     11129.77
> >   Unlabel    941.51          961.61         10769.00     10896.26
>
> The fact the first 3 are all within noise of each other is a good sign.
> (I'm using Disable-NoPatch as a rough indication of noise...)

Yes, I thought that was pretty good news too since it demonstrates that the 
NetLabel patch has no measurable impact on networking performance even when 
enabled.  It's nice when things actually work out the way you thought they 
should :)
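For reference, the Disable-minus-NoPatch deltas Valdis is using as a noise indicator work out as follows (a quick sketch; all numbers come straight from the table quoted above):

```python
# Rough noise estimate: difference between the patch-disabled run and
# the unpatched run, per test (numbers from the table above).
nopatch = {"tcp_stream": 941.52, "udp_stream": 961.61,
           "tcp_rr": 10778.58, "udp_rr": 10901.03}
disable = {"tcp_stream": 941.53, "udp_stream": 961.60,
           "tcp_rr": 10814.46, "udp_rr": 11129.77}

for test in nopatch:
    delta = disable[test] - nopatch[test]
    pct = delta / nopatch[test] * 100
    print("%-10s delta=%+8.2f (%+.2f%%)" % (test, delta, pct))
```

The stream tests differ by one hundredth of a unit; the rr tests move a bit more, with udp_rr showing the largest swing at about +2.1%.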

> >   C_NoCat    932.30          954.04          9904.58     10106.00
>
> Not bad - this measures just our infrastructure.. And it's certainly
> non-zero but probably within the realm of tolerable for sites that need
> CIPSO.
>
> A second pass at benchmarking this should probably note whether the
> slowdown is primarily a CPU-full issue, or an added-latency issue. If it's
> just the TCP window relating to a different bandwidth*RTT product, it has
> different implications for servicing multiple connections.  It's quite
> possible for a 1% increase in CPU use to cost us 5% throughput - but if the
> CPU is still at 80%, that means we can take on another 20 connections and
> each sees the same 5% drop (and yes, I'm glossing over the queuing issues
> of bursty traffic).

I think we can attribute the bulk of the slowdown in the C_NoCat case to the 
extra 12 bytes (no categories, it would be 40 bytes with full categories) of 
the CIPSO IP option ...

  932.30 / 941.52 = 99.02%   (ratio of throughput, C_NoCat vs NoPatch)
     12 / 1500    =  0.80%   (CIPSO option as fraction of max packet length)
 ----------------------------
                    99.82%

... which means once we take into account the inherent limitations of the 
CIPSO protocol there is only a 0.18% slowdown when using CIPSO to transfer 
sensitivity levels.  That seems reasonable to me.
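The arithmetic above can be checked with a short script (the throughput numbers are from the table earlier in the thread; the 1500-byte figure assumes a standard Ethernet MTU):

```python
# Estimate how much of the C_NoCat slowdown is explained by the
# 12-byte CIPSO IP option (no categories) on a 1500-byte MTU.
nopatch_tput = 941.52   # 10^6 bits/sec, tcp_stream, NoPatch
cipso_tput   = 932.30   # 10^6 bits/sec, tcp_stream, C_NoCat
mtu          = 1500     # bytes; assumes standard Ethernet MTU
cipso_opt    = 12       # bytes of CIPSO option without categories

ratio    = cipso_tput / nopatch_tput      # observed throughput ratio
overhead = cipso_opt / mtu                # fraction of each packet used by CIPSO
unexplained = 1.0 - (ratio + overhead)    # slowdown not explained by the option

print("observed ratio:   %.2f%%" % (ratio * 100))        # ~99.02%
print("CIPSO overhead:   %.2f%%" % (overhead * 100))     # 0.80%
print("unexplained loss: %.2f%%" % (unexplained * 100))  # ~0.18%
```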

Feel free to poke holes in my logic - I am neither a statistician nor 
a "performance guy".

> >   C_FlCat    625.46          935.52          9110.29      9262.92
> >   C_F_LxV    686.46          935.53          9325.37      9484.93
>
> Any idea why the tcp_rr only dropped about 14%, but tcp_stream dropped 30%?
> I'd expect the rate to be more sensitive to it, because the testing is
> per-packet, not per-KB?

Well, keep in mind that a 1500 byte packet isn't that far off from a KB ...
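The drops Valdis mentions can be recomputed from the table; a quick sketch, using the NoPatch row as the baseline:

```python
# Relative slowdown of the full-category CIPSO case (C_FlCat) against
# the unpatched baseline, for both the stream and request/response tests.
baseline = {"tcp_stream": 941.52, "tcp_rr": 10778.58}
c_flcat  = {"tcp_stream": 625.46, "tcp_rr": 9110.29}

for test in baseline:
    drop = 1.0 - c_flcat[test] / baseline[test]
    print("%s drop: %.1f%%" % (test, drop * 100))
```

That works out to roughly a 34% drop for tcp_stream and a 15% drop for tcp_rr, so the gap Valdis noticed is real.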

Honestly, I haven't done any real analysis of the numbers yet; I simply ran 
the tests and posted them to the list.  I thought the results (especially the 
enabled-but-not-in-use test) were fairly good for this stage, so I didn't 
feel it was too pressing to dig deep.  While a few people have been very 
generous with comments, I think there are still a number of critical 
reviewers who have yet to fully weigh in on the issue, so I'm hesitant to 
spend too much time on detailed analysis when I'm not certain the patch is 
close to final.

If people really feel that detailed analysis of this test is important for 
acceptance let me know and I'll see what I can do.

> >   C_F_NoC    328.69          935.53          6258.61      6415.35
>
> I tuned in late - are there any real configurations where a site would
> actually want cipso_cache_enable=0 set?  Or is this an indication that
> the option needs to be nailed to 1?

It was more out of my own curiosity than anything else; I just thought I 
would throw it in here in case others were curious too.  Basically, I have 
always asserted that a CIPSO label cache would have a huge benefit for 
receive-side performance, but I never had any numbers to back it up - now I 
do.

Now, let me head off Steve G's question before he can ask it - "Why do we need 
a cache en/disable sysctl flag?"  Because the cache uses memory which some 
people might not want to spend, and the flag itself has negligible impact on 
the code.

-- 
paul moore
linux security @ hp
