Message-ID: <20191108122447.GQ5671@hirez.programming.kicks-ass.net>
Date: Fri, 8 Nov 2019 13:24:47 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Quentin Perret <qperret@...gle.com>
Cc: Kirill Tkhai <ktkhai@...tuozzo.com>, linux-kernel@...r.kernel.org,
aaron.lwe@...il.com, valentin.schneider@....com, mingo@...nel.org,
pauld@...hat.com, jdesfossez@...italocean.com,
naravamudan@...italocean.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, juri.lelli@...hat.com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
kernel-team@...roid.com, john.stultz@...aro.org
Subject: Re: NULL pointer dereference in pick_next_task_fair
On Fri, Nov 08, 2019 at 01:00:34PM +0100, Peter Zijlstra wrote:
> > That would remove one call site to newidle_balance() too, which I think
> > is good. Hackbench probably won't like that, though.
>
> Yeah, that fast path really is important. I've got a few patches pending
> there, fixing a few things and that gets me 2% extra on a sched-yield
> benchmark.
That is, the fast path also enables a cpu-cgroup optimization that wins
something on the order of 3% for cgroup workloads.
The cgroup optimization is basically that when we schedule from
fair->fair, we can avoid putting/setting the whole cgroup hierarchy and
only have to update the part that changed.
Couple that with the set_next_buddy() from dequeue_task_fair(), which
results in the next task being more likely to be from the same cgroup,
and you've got a win.