Message-ID: <1501508545.6867.32.camel@gmail.com>
Date: Mon, 31 Jul 2017 15:42:25 +0200
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Josef Bacik <josef@...icpanda.com>,
Joel Fernandes <joelaf@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Juri Lelli <Juri.Lelli@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Patrick Bellasi <patrick.bellasi@....com>,
Brendan Jackman <brendan.jackman@....com>,
Chris Redpath <Chris.Redpath@....com>,
Michael Wang <wangyun@...ux.vnet.ibm.com>,
Matt Fleming <matt@...eblueprint.co.uk>
Subject: Re: wake_wide mechanism clarification
On Mon, 2017-07-31 at 12:21 +0000, Josef Bacik wrote:
>
> I've been working in this area recently because of a cpu imbalance problem.
> Wake_wide() definitely makes it so we're waking affine way too often, but I
> think messing with wake_wide to solve that problem is the wrong solution. This
> is just a heuristic to see if we should wake affine, the simpler the better. I
> solved the problem of waking affine too often like this
>
> https://marc.info/?l=linux-kernel&m=150003849602535&w=2
Wait a minute, that's not quite fair :) Wake_wide() can't be blamed
for causing too frequent affine wakeups when what it does is filter
some of them out. While it may not reject aggressively enough for you
(which is why you bent it up to be very aggressive), it seems the
problem from your load's POV is the scheduler generally being too
eager to bounce tasks around.
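
To make the filter concrete, here's a minimal user-space sketch modeled
on wake_wide()/record_wakee() in kernel/sched/fair.c of this era, with
jiffies, sd_llc_size and task_struct reduced to plain variables for
illustration, so treat it as a model of the heuristic rather than the
kernel code itself: switching wakeup partners bumps a per-task flip
count, and only when both waker and wakee flip counts indicate a
fan-out wider than the LLC is the wakeup refused the affine path.

/*
 * User-space sketch of the wake_wide() filter: flip counts decay about
 * once per second, and a wakeup only goes "wide" when the observed
 * fan-out exceeds the LLC size.  Illustration only, not kernel code.
 */
#include <stdio.h>

struct task {
	unsigned int wakee_flips;
	unsigned long wakee_flip_decay_ts;
	struct task *last_wakee;
};

static unsigned long now_secs;		/* stand-in for jiffies/HZ      */
static unsigned int llc_size = 8;	/* stand-in for sd_llc_size     */

/* called on the waker: count how often it switches wakeup partners */
static void record_wakee(struct task *waker, struct task *wakee)
{
	/* decay the flip count roughly once per second */
	if (now_secs > waker->wakee_flip_decay_ts + 1) {
		waker->wakee_flips >>= 1;
		waker->wakee_flip_decay_ts = now_secs;
	}
	if (waker->last_wakee != wakee) {
		waker->last_wakee = wakee;
		waker->wakee_flips++;
	}
}

/* return 1 to refuse an affine wakeup, 0 to leave it eligible */
static int wake_wide(struct task *waker, struct task *wakee)
{
	unsigned int master = waker->wakee_flips;
	unsigned int slave = wakee->wakee_flips;

	if (master < slave) {
		unsigned int tmp = master;
		master = slave;
		slave = tmp;
	}
	/* fan-out smaller than the LLC: keep the wakeup affine */
	if (slave < llc_size || master < slave * llc_size)
		return 0;
	return 1;
}

int main(void)
{
	struct task a = { 0 }, b = { 0 };
	struct task server = { 0 }, clients[64] = {{ 0 }}, others[8] = {{ 0 }};
	int i, j;

	/* 1:1 buddies never switch partners -> stay eligible for affine */
	record_wakee(&a, &b);
	record_wakee(&b, &a);
	printf("1:1 pair    -> wake_wide = %d\n", wake_wide(&a, &b));

	/* M:N fan-out: server wakes 64 clients, each client also wakes
	 * 8 other tasks, so both flip counts blow past the LLC size    */
	for (i = 0; i < 64; i++) {
		record_wakee(&server, &clients[i]);
		for (j = 0; j < 8; j++)
			record_wakee(&clients[i], &others[j]);
	}
	printf("M:N fan-out -> wake_wide = %d\n", wake_wide(&server, &clients[0]));
	return 0;
}

The toy main() shows the two extremes: a 1:1 buddy pair stays eligible
for affine wakeups, while an M:N fan-out wider than the LLC is pushed
wide.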
I've also played with rate limiting migration per task, but it had
negative effects too: when idle/periodic balance pulls buddies apart,
rate limiting keeps them from quickly finding each other again, even
though undoing all of that hard load balancer work would be a
throughput win. Sigh.
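
A per-task migration rate limit of the kind mentioned above could look
roughly like the sketch below; the field names and the 2ms cooldown are
invented for illustration and aren't taken from an actual patch:

/*
 * Hypothetical per-task migration rate limit: remember when a task
 * last changed CPU and veto further migrations inside a cooldown
 * window.  Illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

#define MIGRATE_COOLDOWN_NS (2 * 1000 * 1000ULL)	/* 2ms, arbitrary */

struct task {
	int cpu;
	unsigned long long last_migration_ns;
};

/* called from any path that would move the task to another CPU */
static bool can_migrate(struct task *p, int dst_cpu, unsigned long long now_ns)
{
	if (dst_cpu == p->cpu)
		return true;	/* not a migration at all */
	if (now_ns - p->last_migration_ns < MIGRATE_COOLDOWN_NS)
		return false;	/* migrated too recently, keep it put */
	return true;
}

static void migrate(struct task *p, int dst_cpu, unsigned long long now_ns)
{
	p->cpu = dst_cpu;
	p->last_migration_ns = now_ns;
}

int main(void)
{
	struct task p = { .cpu = 0, .last_migration_ns = 0 };

	migrate(&p, 1, 1000000ULL);				/* t = 1ms */
	printf("pull at 2ms allowed: %d\n", can_migrate(&p, 2, 2000000ULL));
	printf("pull at 4ms allowed: %d\n", can_migrate(&p, 2, 4000000ULL));
	return 0;
}

The downside described above falls straight out of this: once
idle/periodic balance has separated two buddies, the same cooldown also
blocks the wakeup path from pulling them back together until it
expires.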
-Mike