Message-ID: <20161018103412.GT3117@twins.programming.kicks-ass.net>
Date: Tue, 18 Oct 2016 12:34:12 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Joseph Salisbury <joseph.salisbury@...onical.com>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Mike Galbraith <efault@....de>, omer.akram@...onical.com
Subject: Re: [v4.8-rc1 Regression] sched/fair: Apply more PELT fixes
On Tue, Oct 18, 2016 at 11:45:48AM +0200, Vincent Guittot wrote:
> On 18 October 2016 at 11:07, Peter Zijlstra <peterz@...radead.org> wrote:
> > So aside from funny BIOSes, this should also show up when creating
> > cgroups when you have offlined a few CPUs, which is far more common I'd
> > think.
>
> The problem is also that the load of the tg->se[cpu] that represents
> the tg->cfs_rq[cpu] is initialized to 1024 in:
>   alloc_fair_sched_group()
>     for_each_possible_cpu(i) {
>       init_entity_runnable_average(se);
>         sa->load_avg = scale_load_down(se->load.weight);
>
> Initializing sa->load_avg to 1024 for a newly created task makes
> sense, as we don't know yet what its real load will be, but I'm not
> sure that we have to do the same for a se that represents a task
> group. That load should be initialized to 0, and it will increase as
> tasks are moved/attached into the task group.
Yes, I think that makes sense. Not sure how horrible that is with the
current state of things, but after your propagate patch, which
reinstates the interactivity hack, that should work for sure.
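
For readers following along, a minimal sketch of the idea being agreed
on here, applied to init_entity_runnable_average() in
kernel/sched/fair.c (illustration only, not a patch from this thread;
entity_is_task(), scale_load_down() and LOAD_AVG_MAX are the existing
kernel helpers/macros):

	void init_entity_runnable_average(struct sched_entity *se)
	{
		struct sched_avg *sa = &se->avg;

		sa->last_update_time = 0;
		/*
		 * period_contrib must stay strictly below 1024; start it
		 * at 1023 so the first enqueue is guaranteed to update
		 * the averages.
		 */
		sa->period_contrib = 1023;
		/*
		 * Tasks keep the old behaviour: they start with their full
		 * weight so they look heavy until their real load
		 * stabilizes.  Group entities would instead start at 0 and
		 * only pick up load as tasks are attached to the group,
		 * as proposed above.
		 */
		if (entity_is_task(se))
			sa->load_avg = scale_load_down(se->load.weight);
		sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
		/*
		 * util_avg and util_sum are set up later, in
		 * post_init_entity_util_avg().
		 */
		sa->util_avg = 0;
		sa->util_sum = 0;
	}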