code of process scheduling

Nifty Hat Mitch mitch48 at sbcglobal.net
Thu Oct 7 08:13:22 UTC 2004


On Tue, Oct 05, 2004 at 12:51:12PM -0700, Nifty Hat Mitch wrote:
> On Sat, Oct 02, 2004 at 08:11:28AM +0530, abhijit kumar wrote:
> > 
> > Dear programmers!
> >   I am a student pursuing a project on Fedora Core 2. I want to change
> > the scheduling algorithms used by the Linux kernel and apply other
....

> I see that others have given you pointers to some files.
> I suggest that you work on the current FC3 test version (FC3T2).
> Developers will be more helpful if you are forward-looking.
...
> What I am curious about are the goals of the project.

    Since abhijit and I swapped some offline mail about this school
    project, I suspect I should share my thoughts here, as they are of
    general interest.

At one level this is a benchmarking project.
Benchmarking  may be the single most important part!

First, use a Google search and collect some standard benchmark tools.  I
would start with "lmbench".  These small benchmark tools are
instrumented (they have clock/timer functions built in already).  Next,
build some benchmarks of your own design.  Hint: use exercises from
previous classes.
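
To make "instrumented" concrete, here is a minimal self-timing
benchmark of the kind you might write yourself.  This is a sketch of
mine, not part of lmbench; build it with gcc -O2.

    /* tiny_bench.c -- minimal self-instrumented benchmark sketch.
     * Times a trivial CPU-bound loop with gettimeofday(), the same
     * basic idea the lmbench tools use internally.
     */
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval start, stop;
        volatile unsigned long sum = 0;
        unsigned long i;
        double secs;

        gettimeofday(&start, NULL);
        for (i = 0; i < 100000000UL; i++)
            sum += i;                   /* the "work" being measured */
        gettimeofday(&stop, NULL);

        secs = (stop.tv_sec - start.tv_sec)
             + (stop.tv_usec - start.tv_usec) / 1e6;
        printf("loop took %.6f seconds (sum=%lu)\n", secs, sum);
        return 0;
    }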

Next, in sched.c (/usr/src/linux-2.6.8-1.521/kernel/sched.c), play with
the existing scheduler and see what changes.  This does require that
you recompile the kernel, but it does not require new code.  You can
still break the system, but the scope of change is more bounded.

Look for this code in the kernel source:
    /*
     * These are the 'tuning knobs' of the scheduler:
     *
     * Minimum timeslice is 10 msecs, default timeslice is 100 msecs,
     * maximum timeslice is 200 msecs. Timeslices get refilled after
     * they expire.
     */
    #define MIN_TIMESLICE           ( 10 * HZ / 1000)
    #define MAX_TIMESLICE           (200 * HZ / 1000)
    #define ON_RUNQUEUE_WEIGHT       30
    #define CHILD_PENALTY            95
    #define PARENT_PENALTY          100
    #define EXIT_WEIGHT               3
    #define PRIO_BONUS_RATIO         25
    #define MAX_BONUS               (MAX_USER_PRIO * PRIO_BONUS_RATIO / 100)
    #define INTERACTIVE_DELTA         2
    #define MAX_SLEEP_AVG           (AVG_TIMESLICE * MAX_BONUS)
    #define STARVATION_LIMIT        (MAX_SLEEP_AVG)

The point of this step is that a scheduler must balance various
activities.  These kernel knobs control this balance as much as, or
more than, the algorithm itself.
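
To get a feel for what two of these knobs do before touching the
kernel, here is a rough user-space illustration of how a static
priority (nice value) could be scaled onto a timeslice between
MIN_TIMESLICE and MAX_TIMESLICE.  This is my own simplification, not
the kernel's actual formula.

    /* timeslice_sketch.c -- illustration only, NOT the kernel's code.
     * One plausible way MIN_TIMESLICE/MAX_TIMESLICE could be used to
     * hand larger slices to higher-priority (lower nice) tasks.
     */
    #include <stdio.h>

    #define HZ            1000               /* assume a 1000 Hz kernel */
    #define MIN_TIMESLICE ( 10 * HZ / 1000)  /*  10 ms, in ticks */
    #define MAX_TIMESLICE (200 * HZ / 1000)  /* 200 ms, in ticks */

    /* Map nice -20..+19 linearly onto MAX..MIN timeslice. */
    static int timeslice_for_nice(int nice)
    {
        int range = MAX_TIMESLICE - MIN_TIMESLICE;
        return MAX_TIMESLICE - range * (nice + 20) / 39;
    }

    int main(void)
    {
        int nice;

        for (nice = -20; nice <= 19; nice += 13)
            printf("nice %3d -> timeslice %3d ticks\n",
                   nice, timeslice_for_nice(nice));
        return 0;
    }

Changing a knob in sched.c and rebuilding shifts that whole mapping;
the benchmarks then tell you whether the shift helped or hurt.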

There are also user space knobs....
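
nice(1), renice(8) and, from a program, nice(2) or sched_setscheduler(2)
are the obvious ones.  A quick sketch of the programmatic form (my
example, error handling trimmed to the essentials):

    /* nice_knob.c -- sketch of a user-space knob: lower our own
     * priority with nice(2), then confirm it with getpriority(2).
     */
    #include <stdio.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        errno = 0;
        if (nice(10) == -1 && errno != 0) {  /* add 10 to our nice value */
            perror("nice");
            return 1;
        }
        printf("now running at nice %d\n", getpriority(PRIO_PROCESS, 0));
        /* ...run the benchmark workload here and compare timings... */
        return 0;
    }

Running the same benchmark at different nice levels, with and without
competing load, shows the scheduler's priority handling without
rebuilding anything.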

As you work through the above, you can look at the scheduler code
and make other specific changes based on what you discover
in your baseline benchmarking and analysis.

Once the current scheduler has been parameterized, you should understand
which benchmarks are scheduler-sensitive and why.

Some of the benchmark code you write should be selectively greedy for
various system resources. 
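
For example, a deliberately CPU-greedy load generator (a hypothetical
helper of mine; memory- and I/O-greedy variants follow the same
pattern):

    /* cpu_hog.c -- deliberately CPU-greedy load generator.  Spins for
     * N seconds (default 10); run several copies in the background
     * while timing another benchmark to see how the scheduler divides
     * the CPU between them.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        int seconds = (argc > 1) ? atoi(argv[1]) : 10;
        time_t stop = time(NULL) + seconds;
        volatile unsigned long spins = 0;

        while (time(NULL) < stop)
            spins++;                    /* burn CPU and nothing else */

        printf("spun %lu times in ~%d seconds\n",
               (unsigned long)spins, seconds);
        return 0;
    }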

One obvious thing to watch for and measure is application startup
time.  Here is the classic hello world (a.out).  Note that most timing
tools are handy and require no special code.  Of interest: the
variability of the run times is a result of scheduler actions.

    $ time a.out
    Hello world

    real    0m0.004s
    user    0m0.001s
    sys     0m0.002s
    $ time a.out
    Hello world

    real    0m0.005s
    user    0m0.000s
    sys     0m0.003s

And simple "needs a window" stuff like this is fun...

    $ time xterm -e a.out

    real    0m0.305s
    user    0m0.085s
    sys     0m0.026s
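
The same startup measurement can be built into a small harness, which
takes the shell out of the picture and makes it easy to repeat the run
many times.  A sketch of mine; "./a.out" is the hello-world binary
above.

    /* startup_time.c -- time a fork/exec/exit cycle of a trivial
     * program.  Repeating the measurement shows the run-to-run
     * variability that the scheduler introduces.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    int main(void)
    {
        struct timeval start, stop;
        pid_t pid;

        gettimeofday(&start, NULL);
        pid = fork();
        if (pid == 0) {                  /* child: run the target */
            execl("./a.out", "a.out", (char *)NULL);
            _exit(127);                  /* only reached if exec failed */
        }
        waitpid(pid, NULL, 0);           /* parent: wait for the child */
        gettimeofday(&stop, NULL);

        printf("startup+run took %.6f seconds\n",
               (stop.tv_sec - start.tv_sec)
               + (stop.tv_usec - start.tv_usec) / 1e6);
        return 0;
    }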

Also, for handy but non-trivial benchmarks, look at tools like
glxgears and x11perf.  There are also httpd and CGI benchmarks....

Do not ignore application profiling (strace, ltrace) and kernel profiling 
as benchmark tools.
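
Along the same lines, a benchmark can report on itself with
getrusage(2).  This is not strace, but it is a cheap way to fold
user/system time and context-switch counts into your own tools (a
sketch of mine):

    /* self_account.c -- let a benchmark report on itself with
     * getrusage(2): user/system CPU time plus voluntary and
     * involuntary context switches, both strongly scheduler related.
     */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rusage ru;
        volatile unsigned long sum = 0;
        unsigned long i;

        for (i = 0; i < 100000000UL; i++)   /* some work to account for */
            sum += i;

        getrusage(RUSAGE_SELF, &ru);
        printf("user %ld.%06lds  sys %ld.%06lds  vcsw %ld  ivcsw %ld\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
               (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec,
               ru.ru_nvcsw, ru.ru_nivcsw);
        return 0;
    }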

Quiz for scheduler "wannabe" authors:

Given:
       A two-man racing team.
       The race is 100 km in length.
       The first team member drives the first half at 50 km/hr.

Question 1:
       How fast must the second driver drive the second
       half of the race for the TEAM to average 100 km/hr?

Question 2:
       Essay: how the heck does this apply to the Linux scheduler?


-- 
	T o m  M i t c h e l l 
	Me, I would "Rather" Not.



