Message-ID: <CAKfTPtA+Cr0WmG02YW_J9g=nrquUGMSma-XH6XW_V2jMYsyTyA@mail.gmail.com>
Date: Wed, 9 Nov 2016 16:23:18 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Yuyang Du <yuyang.du@...el.com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Wanpeng Li <kernellwp@...il.com>
Subject: Re: [PATCH 4/6 v7] sched: propagate load during synchronous attach/detach
On 9 November 2016 at 16:03, Peter Zijlstra <peterz@...radead.org> wrote:
> On Tue, Nov 08, 2016 at 10:53:45AM +0100, Vincent Guittot wrote:
>> When a task moves from/to a cfs_rq, we set a flag which is then used to
>> propagate the change at the parent level (sched_entity and cfs_rq) during
>> the next update. If the cfs_rq is throttled, the flag stays pending until
>> the cfs_rq is unthrottled.
>>
>> For propagating the utilization, we copy the utilization of group cfs_rq to
>> the sched_entity.
>>
>> For propagating the load, we have to take into account the load of the
>> whole task group in order to evaluate the load of the sched_entity.
>> Similarly to what was done before the rewrite of PELT, we add a correction
>> factor in case the task group's load is greater than its share, so that it
>> contributes the same load as a task of equal weight.
>>
>> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
>> ---
>
>
> I did the below on top, that basically moves code about a bit to reduce
> some #ifdef and kills a few comments that I thought were of the:
>
> i++; /* increment by one */
>
> quality.
OK. The changes look fine to me.
>
[snip]
>