[Spacewalk-list] Performance Tuning

Amedeo Salvati amedeo at oscert.net
Fri Apr 18 22:08:11 UTC 2014


On 18/04/2014 23:43, Matthew Madey wrote:
> Looking to get some feedback on any performance tuning you may have 
> done on your Spacewalk server and/or external Oracle database 
> servers.
>
> I'm looking to patch 1,600 systems simultaneously to meet our business 
> requirements. At our current settings, with an 8 CPU / 32 GB RAM 
> Spacewalk server, I can patch 400 systems simultaneously without much 
> issue. Much of our bottleneck appears to be disk await time on the 
> Oracle database side, which we are addressing, but I'm wondering if 
> there are other kernel/Tomcat/Apache settings I should be looking at 
> to improve handling of all the XMLRPC traffic coming into the 
> Spacewalk server.
> 400 systems appears to be our hard limit at the moment; adding just 
> another 50 or so leads to some pretty catastrophic effects. I haven't 
> seen any GC or JVM errors in catalina.out so far, but our memory 
> usage goes through the roof and we begin swapping badly. My guess is 
> that the wait time on the database is so bad that the Spacewalk 
> server starts backlogging requests until it runs out of memory.
>
> Has anyone else attempted to patch this volume of systems at one time? 
> If so, what changes did you make to the server configuration to 
> accommodate that?
>
>
>
> Below are some of the network tuning parameters we've entered.
>
> net.ipv4.tcp_max_syn_backlog = 8192
> net.ipv4.tcp_syncookies = 1
> net.ipv4.conf.all.rp_filter = 1
> net.ipv4.conf.all.accept_source_route = 0
> net.ipv4.conf.all.accept_redirects = 0
> net.ipv4.conf.all.secure_redirects = 0
> net.ipv4.conf.default.rp_filter = 1
> net.ipv4.conf.default.accept_source_route = 0
> net.ipv4.conf.default.accept_redirects = 0
> net.ipv4.conf.default.secure_redirects = 0
> net.ipv4.icmp_echo_ignore_broadcasts = 1
> net.ipv4.ip_forward = 0
> net.ipv4.conf.all.send_redirects = 0
> net.ipv4.conf.default.send_redirects = 0
> net.ipv4.neigh.default.unres_qlen = 3
>
> #Added OSE Tuning
> ###8x normal for faster network queue draining
> net.core.dev_weight = 512
> #
> ###3x normal for a queue and budget suited to networks faster than 100 Mbps
> net.core.netdev_budget = 10000
> net.core.netdev_max_backlog = 30000
> net.core.rmem_max = 4194304
> net.core.wmem_max = 1048576
>
> ####min, default (BDP), max (2xBDP) for window scaling (BDP calculated for 10 Gbps link speed and near-zero LAN latency)
> net.ipv4.tcp_rmem = 4096 87380  4194304
> net.ipv4.tcp_wmem = 4096 16384 4194304
>
> ####Affects UDP streams similarly to the above for TCP.
> net.core.rmem_default = 312500
> net.core.wmem_default = 312500
>
> ###Enable Packetization Layer Path MTU Discovery (PLPMTUD)
> net.ipv4.tcp_mtu_probing = 1
>
> net.core.somaxconn = 1536
>
>
>
> _______________________________________________
> Spacewalk-list mailing list
> Spacewalk-list at redhat.com
> https://www.redhat.com/mailman/listinfo/spacewalk-list
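
On the Tomcat/Apache side of the question, the knobs that usually matter
for a burst of simultaneous XMLRPC clients are the connector thread pool
and accept queue. A sketch only; the file path, port, and numbers are
assumptions for an EL6-era Spacewalk install, not values taken from the
poster's environment:

```xml
<!-- /etc/tomcat6/server.xml (path assumed): raise the worker pool and
     the accept queue on the AJP connector that sits behind Apache -->
<Connector port="8009" protocol="AJP/1.3"
           maxThreads="512" acceptCount="512"
           connectionTimeout="20000" />
```

A matching bump on the Apache side (2.2-era prefork syntax, values
illustrative) keeps httpd from becoming the new choke point:

```apache
<IfModule prefork.c>
    ServerLimit  512
    MaxClients   512
</IfModule>
```

Note that raising thread counts only helps if the database can absorb
them; with slow disk await, more concurrent workers can simply deepen
the backlog described above.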
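
As a cross-check on the quoted tcp_rmem/tcp_wmem maxima, the
bandwidth-delay product (BDP) arithmetic the comments refer to can be
reproduced in a few lines of shell. The RTT below is an assumption
back-solved from the 4 MiB (2xBDP) maximum, not a value measured on the
poster's network:

```shell
# BDP (bytes) = link rate (bytes/s) x RTT (s); 2xBDP sets the ceiling.
rate_Bps=$((10000000000 / 8))         # 10 Gbit/s in bytes per second
rtt_us=1678                           # ~1.7 ms RTT, an assumed figure
bdp=$((rate_Bps * rtt_us / 1000000))  # bandwidth-delay product in bytes
echo "BDP:   $bdp bytes"
echo "2xBDP: $((2 * bdp)) bytes"      # lands near the 4194304 max above
```

At genuinely near-zero LAN latency the computed BDP shrinks well below
4 MiB, so the quoted maximum leaves comfortable headroom.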

hello Matthew,

I've never tried workloads at that scale... but a simple question: have 
you tried switching from HTTPS to HTTP?
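
For reference, the switch being suggested here is typically made in the
client-side Red Hat Network configuration; a sketch assuming the stock
file location, with a placeholder hostname:

```ini
# /etc/sysconfig/rhn/up2date -- point clients at plain HTTP so each
# XMLRPC call skips the TLS handshake (hostname is a placeholder;
# weigh the security trade-off before doing this in production)
serverURL=http://spacewalk.example.com/XMLRPC
```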

best regards
a

-- 
Amedeo Salvati


