Message-ID: <b647ffbd0711160248v444d8f64ic72a17fcb56adc6d@mail.gmail.com>
Date: Fri, 16 Nov 2007 11:48:50 +0100
From: "Dmitry Adamushko" <dmitry.adamushko@...il.com>
To: "Micah Dowty" <micah@...are.com>
Cc: "Ingo Molnar" <mingo@...e.hu>,
"Christoph Lameter" <clameter@....com>,
"Kyle Moffett" <mrmacman_g4@....com>,
"Cyrus Massoumi" <cyrusm@....net>,
"LKML Kernel" <linux-kernel@...r.kernel.org>,
"Andrew Morton" <akpm@...l.org>, "Mike Galbraith" <efault@....de>,
"Paul Menage" <menage@...gle.com>,
"Peter Williams" <pwil3058@...pond.net.au>
Subject: Re: High priority tasks break SMP balancer?
On 16/11/2007, Ingo Molnar <mingo@...e.hu> wrote:
>
> * Micah Dowty <micah@...are.com> wrote:
>
> > > I am a bit at a loss as to how this could relate to the patch. This
> > > looks like a load balance logic issue that causes the load
> > > calculation to go wrong?
> >
> > My best guess is that this has something to do with the timing with
> > which we sample the CPU's instantaneous load when calculating the load
> > averages.. but I still understand only the basics of the scheduler and
> > SMP balancer. All I really know for sure at this point regarding your
> > patch is that git-bisect found it for me.
>
> hm, your code uses timeouts for this, right? The CPU load average that
> is used for SMP load balancing is sampled from the scheduler tick - and
> has been sampled from the scheduler tick for eons. v2.6.23 defaulted to
> a different method but v2.6.24 samples it from the tick again.
yeah... but one of the kernels in question is actually 2.6.23.1, which
does use the 'precise' accounting.
Micah,
could you try changing one of these:
cat /proc/sys/kernel/sched_stat_granularity
(set it to a value equal to one tick on your system)
or just clear bit #3 (value 8 == binary 1000, i.e.
SCHED_FEAT_PRECISE_CPU_LOAD) here:
cat /proc/sys/kernel/sched_features
(this feature is enabled by default in 2.6.23.1)
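to make the second option concrete, something like this should do it
(a quick sketch in C; one can of course just echo the new mask into
the file instead; the only assumption is that 8 ==
SCHED_FEAT_PRECISE_CPU_LOAD, as in 2.6.23's kernel/sched.c):

#include <stdio.h>

/* SCHED_FEAT_PRECISE_CPU_LOAD == 8 (bit #3) in 2.6.23's kernel/sched.c */
#define PRECISE_CPU_LOAD 8UL

int main(void)
{
	const char *path = "/proc/sys/kernel/sched_features";
	unsigned long feats;
	FILE *f;

	/* read the current feature mask */
	f = fopen(path, "r");
	if (!f || fscanf(f, "%lu", &feats) != 1) {
		perror(path);
		return 1;
	}
	fclose(f);

	/* write it back with the precise-load bit cleared */
	f = fopen(path, "w");
	if (!f || fprintf(f, "%lu\n", feats & ~PRECISE_CPU_LOAD) < 0) {
		perror(path);
		return 1;
	}
	fclose(f);
	return 0;
}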
anyway, when it comes to calculating rq->cpu_load[], a nice(0)
cpu-hog task (on cpu_0) may generate a load (i.e. a contribution to
rq->cpu_load[]) similar to that of e.g. some negatively reniced task
(on cpu_1) which runs only periodically (say, once per tick for N ms.,
etc.) [*]
The thing is that the higher the task's priority, the bigger the
'weight' it carries (see the prio_to_weight[] table in sched.c)...
and, roughly, the load a task generates is 'proportional' not only to
its 'run-time per fixed interval of time' but also to its 'weight'.
That's why [*] above holds.
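to put rough numbers on it (just a back-of-the-envelope sketch; the
weights are the nice(0) and nice(-20) entries of prio_to_weight[] in
2.6.23's kernel/sched.c, and the 10% duty cycle is an illustrative
figure matching the example below):

#include <stdio.h>

/* weights from the prio_to_weight[] table in 2.6.23's kernel/sched.c */
#define WEIGHT_NICE_0	1024UL	/* nice   0 */
#define WEIGHT_NICE_M20	88761UL	/* nice -20 */

int main(void)
{
	/* a nice(-20) task that is runnable ~10% of the time */
	double periodic = WEIGHT_NICE_M20 * 0.10;
	/* three nice(0) tasks, each hogging the cpu 100% of the time */
	double hogs = 3.0 * WEIGHT_NICE_0;

	printf("nice(-20), 10%% duty cycle : ~%.0f\n", periodic); /* ~8876 */
	printf("3 x nice(0) cpu hogs      : %.0f\n", hogs);       /* 3072 */
	return 0;
}

i.e. a briefly-running but heavily-weighted task can contribute to the
load average as much as (or even more than) several full-time nice(0)
hogs.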
so you may have a situation like this:
cpu_0 : e.g. a nice(-20) task running periodically every tick and
generating, say, ~10% cpu load ;
cpu_1 : 2-3 nice(0) cpu-hog tasks ;
both cpus may be seen with a similar rq->cpu_load[]... yeah, one could
argue that one of the cpu hogs should be migrated to cpu_0 to consume
the remaining 'time slots', and that it would not "disturb" the
nice(-20) task, as the latter is able to preempt the lower-prio task
whenever it wants (given fine-grained kernel preemption) and we don't
care that much about cache thrashing here.
btw., without the precise load accounting, there can be situations
where the nice(-20) task (or, say, a periodic RT task) is not seen at
all (i.e. doesn't contribute to cpu_load[]) on cpu_0...
we sample every tick (sched.c :: update_cpu_load()) and take
this_rq->ls.load.weight at that particular moment (that is, the sum of
the 'weights' of all tasks runnable on this rq at the instant of the
tick)... and it may well be that the aforementioned high-priority task
is just never (or, more likely, only rarely) runnable at that moment,
as it runs for short intervals of time in between ticks.
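here is a toy user-space model of that sampling (the decayed-average
step is the one from 2.6.23's update_cpu_load(); the rest is made up
for illustration): if the high-prio task always runs in between ticks,
every sample is 0 and it never shows up in cpu_load[] at all:

#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

static unsigned long cpu_load[CPU_LOAD_IDX_MAX];

/* the decayed-average step from 2.6.23's update_cpu_load():
 * cpu_load[i] = (old * (2^i - 1) + sample) / 2^i */
static void update_cpu_load(unsigned long this_load)
{
	unsigned long scale;
	int i;

	for (i = 0, scale = 1; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
		unsigned long old_load = cpu_load[i];

		cpu_load[i] = (old_load * (scale - 1) + this_load) >> i;
	}
}

int main(void)
{
	int tick;

	/* a nice(-20) task (weight 88761) that wakes up right after each
	 * tick and sleeps again before the next one: at every sampling
	 * point the runqueue is empty, so every sample is 0 and the task
	 * is invisible to the load balancer */
	for (tick = 0; tick < 100; tick++)
		update_cpu_load(0 /* rq->ls.load.weight at the tick */);

	for (tick = 0; tick < CPU_LOAD_IDX_MAX; tick++)
		printf("cpu_load[%d] = %lu\n", tick, cpu_load[tick]);

	return 0;
}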
>
> Ingo
>
--
Best regards,
Dmitry Adamushko