Message-ID: <20160520212405.GL21993@codeblueprint.co.uk>
Date: Fri, 20 May 2016 22:24:05 +0100
From: Matt Fleming <matt@...eblueprint.co.uk>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
Pavan Kondeti <pkondeti@...eaurora.org>,
Ben Segall <bsegall@...gle.com>,
Morten Rasmussen <morten.rasmussen@....com>,
Paul Turner <pjt@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>, byungchul.park@....com,
Andrew Hunter <ahh@...gle.com>,
Mike Galbraith <mgalbraith@...e.de>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [PATCH 2/3] sched,fair: Fix local starvation

On Tue, 10 May, at 07:43:16PM, Peter Zijlstra wrote:
> Mike reported that the recent commit 3a47d5124a95 ("sched/fair: Fix
> fairness issue on migration") broke interactivity and the signal
> starve test.
>
> The problem is that I assumed ENQUEUE_WAKING was only set when we do a
> cross-cpu wakeup (migration), which isn't true. This means we now
> destroy the vruntime history of tasks and wakeup-preemption suffers.
>
> Cure this by making my assumption true: only call
> sched_class::task_waking() when we do a cross-cpu wakeup. This avoids
> the indirect call in the case we do a local wakeup.
>
> Cc: Pavan Kondeti <pkondeti@...eaurora.org>
> Cc: Ben Segall <bsegall@...gle.com>
> Cc: Matt Fleming <matt@...eblueprint.co.uk>
> Cc: Morten Rasmussen <morten.rasmussen@....com>
> Cc: Paul Turner <pjt@...gle.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: byungchul.park@....com
> Cc: Andrew Hunter <ahh@...gle.com>
> Fixes: 3a47d5124a95 ("sched/fair: Fix fairness issue on migration")
> Reported-by: Mike Galbraith <mgalbraith@...e.de>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> kernel/sched/core.c | 29 +++++++++++++++++++++--------
> kernel/sched/fair.c | 41 ++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 61 insertions(+), 9 deletions(-)
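
To make sure I'm reading the change right: the idea, as I understand
it, is that vruntime is only renormalised against min_vruntime when the
wakeup actually crosses CPUs, so a local wakeup keeps its vruntime
history intact. A toy userspace model of that idea (not the actual
patch; the struct names and the fixed min_vruntime values below are
made up):

/*
 * Toy model, not the actual patch: only strip the old queue's
 * min_vruntime (and re-add the new queue's) when the wakeup migrates
 * the task.  A local wakeup leaves vruntime untouched, preserving its
 * history for wakeup preemption.
 */
#include <stdio.h>

struct toy_cfs_rq { unsigned long long min_vruntime; };
struct toy_task   { unsigned long long vruntime; int cpu; };

/* Mirrors the task_waking renormalisation: vruntime -= min_vruntime. */
static void toy_task_waking(struct toy_cfs_rq *src, struct toy_task *p)
{
	p->vruntime -= src->min_vruntime;
}

/* Wake p onto new_cpu; renormalise only on a cross-CPU wakeup. */
static void toy_wakeup(struct toy_cfs_rq rqs[], struct toy_task *p, int new_cpu)
{
	if (new_cpu != p->cpu) {
		toy_task_waking(&rqs[p->cpu], p);	  /* strip old base */
		p->vruntime += rqs[new_cpu].min_vruntime; /* add new base at enqueue */
		p->cpu = new_cpu;
	}
	/* Local wakeup: vruntime history is left alone. */
}

int main(void)
{
	struct toy_cfs_rq rqs[2] = { { 1000 }, { 5000 } };
	struct toy_task p = { 1200, 0 };

	toy_wakeup(rqs, &p, 0);	/* local:     vruntime stays 1200 */
	printf("local wakeup: vruntime=%llu cpu=%d\n", p.vruntime, p.cpu);

	toy_wakeup(rqs, &p, 1);	/* cross-cpu: 1200 - 1000 + 5000 = 5200 */
	printf("cross-cpu:    vruntime=%llu cpu=%d\n", p.vruntime, p.cpu);
	return 0;
}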

This patch appears to cause a regression of between ~8% and ~10% for
hackbench -pipe with groups >= NR_CPU.

I haven't probed much yet, but it looks like the vruntime of tasks has
gone nuts.
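
FWIW, the runs were along these lines (flag-style hackbench assumed;
the exact group and loop counts here are illustrative, not the precise
numbers from my runs):

  $ hackbench -p -g $(nproc) -l 10000    # -g at or above the CPU count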