[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: Kernel Timeslice



Steve West wrote:
>> > Steve West wrote:
>> >> I am running Fedora 9 x86 64 bit. What is the kernel timetick per
>> >> thread? How many threads per second does the kernel run?
>> > Probably not quite what you are asking but here goes:
>> > http://kerneltrap.org/node/464
>> >
>> > run for a few seconds:
>> > $ vmstat 1
>> >
>> > look at system|in = interrupts per second.
>> > this is approximately the interrupts per second, i.e. the timer HZ value.
>> >
>> > from the kernel config parameter HZ_1000 etc:
>> > getconf CLK_TCK
>> >
>> > DaveT.
>> Is there any way to set the ticks without rebuilding the kernel?
>
> Perhaps if you explained what you are trying to achieve people might be
> able to help you get there.
>
> poc
I have an application/service with 1000 or so threads. Most of them sit in
TCP/IP socket accept and connect calls. I want all of the threads to get a
chance to run within a second or so to achieve reasonable throughput, i.e.
I would like the kernel to run 1000 threads per second. Right now I think
F9 x86 64-bit is set for 100 ticks per second.

The ticks matter when the threads are competing for CPU, but it looks
like in your case they'll mostly be waiting on socket calls (during
which the scheduler will hand off to another thread anyway), so
increasing the timeslice frequency is probably not going to make a
difference. Hard to know without testing, of course.
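One way to check this on a running system (a sketch; `$$` below just uses the current shell's PID as a stand-in for your server's PID):

```shell
# Threads blocked in accept()/connect() give up the CPU voluntarily,
# so they accumulate voluntary_ctxt_switches rather than timer-driven
# (nonvoluntary) preemptions. If voluntary switches dominate, the tick
# rate is not what is gating your threads.
grep ctxt_switches /proc/$$/status
```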

poc
Yes, you are correct: under "normal" circumstances 100 ticks per second
would be OK. But if I design for the worst case, where all the threads are
runnable, I need 1000 ticks per second or response will not be good. I did
not want to build a custom kernel, but it looks like I may have to in order
to achieve the design goal.

That depends on whether your design goal is to have the ticks at 1000, or to have the system respond properly. Those are not the same thing.

Before you start building kernels, run vmstat and look at the context switching and interrupt rates. I would expect them to be over 1000 under load, indicating that something else is limiting response.
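If vmstat isn't handy, the same counters can be sampled straight from /proc/stat (a sketch; Linux-specific):

```shell
# The ctxt and intr lines in /proc/stat are cumulative counters of
# context switches and interrupts since boot; sampling them one second
# apart gives the per-second rates vmstat would report.
a_ctxt=$(awk '/^ctxt/ {print $2}' /proc/stat)
a_intr=$(awk '/^intr/ {print $2}' /proc/stat)
sleep 1
b_ctxt=$(awk '/^ctxt/ {print $2}' /proc/stat)
b_intr=$(awk '/^intr/ {print $2}' /proc/stat)
echo "context switches/s: $((b_ctxt - a_ctxt)), interrupts/s: $((b_intr - a_intr))"
```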

Then look at, and understand, the tunable parameters in the /proc/sys/kernel/sched_* area. I found major improvements there, trading a small bit of total throughput for far better response. I posted a number of settings to the kernel mailing list, but the meaning of the "features" bits has changed since 2.6.18 or so, and what I did doesn't work the same way now. There is lots of room to tune there, though, before going down the road of maintaining your own kernel config.

--
Bill Davidsen <davidsen tmr com>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot

