Message-ID: <20141003075012.GF10583@worktop.programming.kicks-ass.net>
Date: Fri, 3 Oct 2014 09:50:12 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mike Galbraith <umgwanakikbuti@...il.com>
Cc: Rik van Riel <riel@...hat.com>,
Nicolas Pitre <nicolas.pitre@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org, linaro-kernel@...ts.linaro.org
Subject: Re: [PATCH RFC] sched,idle: teach select_idle_sibling about idle
states
On Fri, Oct 03, 2014 at 08:23:04AM +0200, Mike Galbraith wrote:
> On Thu, 2014-10-02 at 13:15 -0400, Rik van Riel wrote:
>
> > Subject: sched,idle: teach select_idle_sibling about idle states
> >
> > Change select_idle_sibling to take cpu idle exit latency into
> > account. First preference is to select the cpu with the lowest
> > exit latency from a completely idle sched_group inside the CPU;
> > if that is not available, we pick the CPU with the lowest exit
> > latency in any sched_group.
> >
> > This increases the total search time of select_idle_sibling,
> > we may want to look into propagating load info up the sched_group
> > tree in some way. That information would also be useful to prevent
> > the wake_affine logic from causing a load imbalance between
> > sched_groups.
>
> A generic boo hiss aimed in the general direction of all of this
> let's-go-look-at-every-possibility-on-every-wakeup stuff. Less is more.
I hear you, but can you see an actual slowdown with the patch? While the
worst case doesn't change, it does make the average case equal to the
worst case iteration -- where we previously would, on average, inspect
half the CPUs before finding an idle one, we'd now always inspect all of
them in order to compare the idle ones on their exit latencies.
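To illustrate (completely untested sketch; idle_exit_latency() is a
made-up helper, not the interface the patch actually uses):

	/*
	 * Exit-latency aware scan: we can no longer bail at the first
	 * idle CPU, we have to look at all of them to find the one in
	 * the shallowest idle state.
	 */
	static int select_idle_cpu_sketch(struct sched_domain *sd, int target)
	{
		unsigned int best_lat = UINT_MAX;
		int cpu, best = -1;

		for_each_cpu(cpu, sched_domain_span(sd)) {
			unsigned int lat;

			if (!idle_cpu(cpu))
				continue;

			lat = idle_exit_latency(cpu); /* made up */
			if (lat < best_lat) {
				best_lat = lat;
				best = cpu;
			}
		}

		return best < 0 ? target : best;
	}

The old code returns out of the loop at the first idle_cpu() hit; this
one can't, which is where the average-becomes-worst-case comes from.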
Also, with the latest generation of Haswell Xeons having 18 cores (36
threads), this is one massively painful loop for sure.
I'm just not sure what to do about it... I suppose we could artificially
split it into smaller groups; I bet that'll hurt some workloads, but if
we can show it gains more overall we might still be able to do it. The
only real problem is getting actual numbers/workloads (isn't it always) :/
One thing I suppose we could try is keeping a 'busy' flag at the llc
domain, set when all CPUs are busy (we'd clear it from new_idle
balancing); that way we can avoid the entire iteration if we know it's
pointless.
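Something like this (equally untested; the all_busy field is made up):

	/* hypothetical flag hanging off the llc sched_domain */
	struct sched_domain {
		...
		int	all_busy;	/* no idle CPUs in this domain */
	};

	/* in select_idle_sibling(), before we start scanning: */
	if (sd->all_busy)
		return target;		/* skip the whole iteration */

	/* after a full scan that found nothing idle: */
	sd->all_busy = 1;

	/* and from new_idle balancing, when a CPU goes idle: */
	sd->all_busy = 0;

The flag can go momentarily stale, but that should be harmless; worst
case we do one scan we could have skipped, or skip one scan that might
have found something, and regular load balancing cleans up after us.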
Hmm...