Message-ID: <20161004201607.GQ16071@codeblueprint.co.uk>
Date: Tue, 4 Oct 2016 21:16:07 +0100
From: Matt Fleming <matt@...eblueprint.co.uk>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Yuyang Du <yuyang.du@...el.com>
Subject: Re: [PATCH] sched/fair: Do not decay new task load on first enqueue
On Wed, 28 Sep, at 04:46:06AM, Vincent Guittot wrote:
>
> OK, so I'm a bit confused here.
> My understanding of your explanation above is that we now leave a
> small amount of load in runnable_load_avg after the dequeue, so
> another CPU will be chosen. But this seems to be the opposite of
> what Matt said in a previous email:
> "The performance drop comes from the fact that enqueueing/dequeueing a
> task with load 1002 during fork() results in a zero runnable_load_avg,
> which signals to the load balancer that the CPU is idle, so the next
> time we fork() we'll pick the same CPU to enqueue on -- and the cycle
> continues."
Right, we want to avoid the performance drop, which we can do by
leaving a small amount of load in runnable_load_avg. I think Dietmar
and I are saying the same thing.
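The feedback loop Matt described can be sketched with a toy model (this
is illustrative Python, not kernel code; the two-CPU setup, the
`pick_cpu`/`fork_cycle` helpers, and the residual amounts are
assumptions for the sketch — only the load value 1002 comes from the
quoted mail):

```python
# Toy model of why a zero runnable_load_avg after the fork-time
# enqueue/dequeue pair makes the load balancer keep picking the
# same CPU, and why leaving a small residual load breaks the cycle.

def pick_cpu(loads):
    # The balancer favours the CPU with the smallest runnable load;
    # a CPU whose load dropped back to 0 looks idle and always wins.
    return min(range(len(loads)), key=lambda c: loads[c])

def fork_cycle(num_forks, residual):
    # residual: runnable load left behind after the enqueue/dequeue
    # pair during fork(); 0 models the problematic behaviour.
    loads = [0, 0]          # two CPUs, both initially idle
    picks = []
    for _ in range(num_forks):
        cpu = pick_cpu(loads)
        picks.append(cpu)
        loads[cpu] += 1002              # enqueue new task, load ~1002
        loads[cpu] -= 1002 - residual   # dequeue; residual remains

    return picks

# residual == 0: every fork lands on CPU 0 (it always looks idle).
# A small residual makes successive forks spread across CPUs.
```

With `residual = 0`, `fork_cycle(4, 0)` returns `[0, 0, 0, 0]` — the
degenerate placement; with `fork_cycle(4, 1)` the picks alternate
`[0, 1, 0, 1]`, which is the effect of leaving a little load behind.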