Message-ID: <1472638699.3942.14.camel@suse.de>
Date: Wed, 31 Aug 2016 12:18:19 +0200
From: Mike Galbraith <mgalbraith@...e.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [patch v3.18+ regression fix] sched: Further improve spurious
CPU_IDLE active migrations
On Wed, 2016-08-31 at 12:01 +0200, Peter Zijlstra wrote:
> On Tue, Aug 30, 2016 at 07:42:55AM +0200, Mike Galbraith wrote:
> >
> > 43f4d666 partially cured spurious migrations, but when there are
> > completely idle groups on a lightly loaded processor, and there is
> > a buddy pair occupying the busiest group, we will not attempt to
> > migrate due to select_idle_sibling() buddy placement, leaving the
> > busiest queue with one task. We skip balancing, but increment
> > nr_balance_failed until we kick active balancing, and bounce a
> > buddy pair endlessly, demolishing throughput.
>
> Have you run this patch through other benchmarks? It looks like
> something that might make something else go funny.
No, but it will be going through SUSE's performance test grid.
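In case it helps, here's a toy model of the failure mode described
above (the names mimic the scheduler's, but the threshold and reset
are illustrative, not lifted from fair.c):

/* Toy model of the bounce: regular balance refuses to split the
 * buddy pair, the failure counter climbs anyway, active balance
 * finally rips one buddy out, and wakeup placement puts it right
 * back.  Illustrative only -- not actual fair.c logic. */
#include <stdio.h>

int main(void)
{
	int nr_balance_failed = 0, cache_nice_tries = 1, bounces = 0;

	for (int tick = 0; tick < 12; tick++) {
		nr_balance_failed++;	/* balance skipped, still scored */

		if (nr_balance_failed > cache_nice_tries + 2) {
			bounces++;		/* active balance kicks...   */
			nr_balance_failed = 0;	/* ...buddy gets pulled back */
			printf("tick %2d: buddy pair bounced (%d so far)\n",
			       tick, bounces);
		}
	}
	return 0;
}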
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7249,11 +7249,12 @@ static struct sched_group *find_busiest_
> >  		 * This cpu is idle. If the busiest group is not overloaded
> >  		 * and there is no imbalance between this and busiest group
> >  		 * wrt idle cpus, it is balanced. The imbalance becomes
> > -		 * significant if the diff is greater than 1 otherwise we
> > -		 * might end up to just move the imbalance on another group
> > +		 * significant if the diff is greater than 2 otherwise we
> > +		 * may end up merely moving the imbalance to another group,
> > +		 * or bouncing a buddy pair needlessly.
> >  		 */
> >  		if ((busiest->group_type != group_overloaded) &&
> > -		    (local->idle_cpus <= (busiest->idle_cpus + 1)))
> > +		    (local->idle_cpus <= (busiest->idle_cpus + 2)))
> >  			goto out_balanced;
>
> So 43f4d66637bc ("sched: Improve sysbench performance by fixing spurious
> active migration")'s +1 made sense in that it's a tie breaker. If you
> have 3 tasks on 2 groups, one group will have to have 2 tasks, and
> bouncing the one task around just isn't going to help _anything_.
Yeah, but frequently tasks don't come in ones, so you end up with an
endless tug of war between LB ripping communicating buddies apart and
select_idle_sibling() pulling them back together... bouncing cow
syndrome.
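
To put numbers on the check (two CPUs per group is a made-up
illustration; only the comparison itself comes from the patch above):

/* The check in question, applied to toy scenarios.  Two CPUs per
 * group is an assumption for illustration. */
#include <stdio.h>

static const char *verdict(int local_idle, int busiest_idle, int slack)
{
	/* mirrors: local->idle_cpus <= (busiest->idle_cpus + slack) */
	return local_idle <= busiest_idle + slack ? "balanced" : "migrate";
}

int main(void)
{
	/* the tie breaker: 3 tasks on 2 groups, busiest has 2 tasks
	 * (0 idle), local has 1 task (1 idle).  +1 leaves it alone. */
	printf("3 tasks, +1: %s\n", verdict(1, 0, 1));

	/* buddy pair alone on the busiest group, local fully idle:
	 * +1 demands a migration (the tug of war), +2 does not. */
	printf("buddies, +1: %s\n", verdict(2, 0, 1));
	printf("buddies, +2: %s\n", verdict(2, 0, 2));
	return 0;
}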
> Incrementing that to +2 has the effect that if you have two tasks on two
> groups, 0,2 is a valid distribution. Which I understand is exactly what
> you want for this workload. But if the two tasks are unrelated, 1,1
> really is a better spread.
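(Working those numbers with the same toy two-CPU groups as above: two
unrelated tasks stacked on one group gives local->idle_cpus == 2
against busiest->idle_cpus == 0, and with +2 the check reads
2 <= 0 + 2, so we go out_balanced and the 0,2 stack stands where 1,1
would serve an unrelated pair better.)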
True. Better ideas welcome.
-Mike