Message-Id: <1232978100.4863.87.camel@laptop>
Date: Mon, 26 Jan 2009 14:55:00 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Pawel Dziekonski <dzieko@...il.com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>
Subject: Re: CPU scheduler question/problem
On Mon, 2009-01-26 at 14:48 +0100, Pawel Dziekonski wrote:
> 2009/1/23 Peter Zijlstra <peterz@...radead.org>:
>
> > The pipe workload you mentioned would behave that way because pipes
> > 'assume' a producer/consumer behaviour, and thus are more likely to
> > place both tasks on the same cpu -- but will eventually pull them apart
> > if they want to run concurrently.
> >
> > You might enable SCHED_DEBUG=y and try
> > echo NO_SYNC_WAKEUPS > /debug/sched_features
>
> Hello,
>
> that did the trick. OpenSSL now gets a whole core exclusively and runs
> at full performance.
>
> Regarding the quantum chemistry application -- it also uses pipes
> for communication between worker processes. That app now works OK too.
Hmm, how long does a worker run for each received packet? The thing is,
if the data is cache affine to the issuing cpu and the worker only runs
for a short while, it often doesn't make sense to run it on a remote
cpu, as the cache transfer will hurt more than the lost parallelism.

If it runs longer, the load balancer will usually pick it up and move it
around.
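
For concreteness, here is a minimal producer/consumer sketch of the pipe
pattern in question (my own illustration, not the actual app):

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		int fds[2];
		char buf[64];

		if (pipe(fds) == -1) {
			perror("pipe");
			return 1;
		}

		switch (fork()) {
		case -1:
			perror("fork");
			return 1;
		case 0:			/* consumer */
			close(fds[1]);
			/* short 'work' per message; the sync wakeup keeps
			   us cache affine with the producer */
			while (read(fds[0], buf, sizeof(buf)) > 0)
				;	/* pretend to process buf */
			return 0;
		default:		/* producer */
			close(fds[0]);
			strcpy(buf, "work item");
			for (int i = 0; i < 100000; i++)
				if (write(fds[1], buf, strlen(buf) + 1) < 0)
					break;
			close(fds[1]);
			return 0;
		}
	}

With the trivial consumer above, keeping both ends on one cpu is a win;
give it a few milliseconds of real work per message and you'd expect the
balancer to spread the two out.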
/debug/sched_features is mostly a debug tool, it's not a supported/stable
tuning interface.
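
The echo above is all there is to it, but for reference the same toggle
from C (a sketch assuming debugfs is mounted on /debug and the kernel
was built with SCHED_DEBUG=y):

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		const char *feat = "NO_SYNC_WAKEUPS";
		int fd = open("/debug/sched_features", O_WRONLY);

		if (fd == -1) {
			perror("open /debug/sched_features");
			return 1;
		}
		if (write(fd, feat, strlen(feat)) < 0)
			perror("write");
		close(fd);
		return 0;
	}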
> Where can I read more on tuning sched_features for different workloads?
kernel/sched*.[ch] ;-)
> Also, is there a way to get a list of available scheduling policies and
> how to switch between them?
include/linux/sched.h:

#define SCHED_NORMAL	0	/* the default time-sharing policy (CFS) */
#define SCHED_FIFO	1	/* realtime, first-in first-out */
#define SCHED_RR	2	/* realtime, round-robin */
#define SCHED_BATCH	3	/* cpu-bound batch jobs, weaker wakeup preemption */
/* SCHED_ISO: reserved but not implemented yet */
#define SCHED_IDLE	5	/* very low priority background tasks */
and the POSIX sched_{set,get}scheduler() functions.
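
For example, switching the calling process to SCHED_BATCH (a minimal
sketch; non-realtime policies want sched_priority == 0):

	#define _GNU_SOURCE		/* for SCHED_BATCH with glibc */
	#include <sched.h>
	#include <stdio.h>

	int main(void)
	{
		struct sched_param sp = { .sched_priority = 0 };

		/* pid 0 means the calling process; SCHED_FIFO/SCHED_RR
		   need root and a priority in [1,99] instead */
		if (sched_setscheduler(0, SCHED_BATCH, &sp) == -1) {
			perror("sched_setscheduler");
			return 1;
		}
		printf("policy is now %d\n", sched_getscheduler(0));
		return 0;
	}

From the shell, the chrt(1) utility wraps these same calls.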