Message-ID: <CAKfTPtCUbe+qE47byN_1ptc+A1Tvt_E4Y-5rZQmBsUhLiMGAqw@mail.gmail.com>
Date: Wed, 16 May 2018 09:13:20 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Patrick Bellasi <patrick.bellasi@....com>,
Joel Fernandes <joel@...lfernandes.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
"open list:THERMAL" <linux-pm@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>
Subject: Re: [PATCH 3/3] sched/fair: schedutil: explicit update only when required
On 15 May 2018 at 18:53, Peter Zijlstra <peterz@...radead.org> wrote:
> On Tue, May 15, 2018 at 03:53:43PM +0100, Patrick Bellasi wrote:
>> On 15-May 12:19, Vincent Guittot wrote:
>> > On 14 May 2018 at 18:32, Patrick Bellasi <patrick.bellasi@....com> wrote:
>
>> > Yes se becomes NULL only when you reach root domain
>
> root group; domains are something else again ;-)
Yes, good point :-)
>
>> Thus, the scheduler knows that we are going to sleep: does it really
>> make sense to send a notification in this case?
>
> It might; esp. on these very slow changing machines.
>
>> What about adding a new explicit callback at the end of:
>> update_blocked_averages() ?
>>
>> Something like:
>>
>> ---8<---
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index cb77407ba485..6eb0f31c656d 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -7740,6 +7740,9 @@ static void update_blocked_averages(int cpu)
>> if (done)
>> rq->has_blocked_load = 0;
>> #endif
>> +
>> + cpufreq_update_util(rq, SCHED_CPUFREQ_IDLE);
>> +
>> rq_unlock_irqrestore(rq, &rf);
>> }
>> ---8<---
>>
>> Where we can also pass in a new SCHED_CPUFREQ_IDLE flag just to notify
>> schedutil that the CPU is currently IDLE?
>>
>> Could that work?
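
Just to make the proposal concrete, here is a rough, untested sketch (not
part of the patch above) of how schedutil could consume such a flag.
SCHED_CPUFREQ_IDLE and its bit value are only the new flag proposed here,
not an existing symbol; sugov_update_single() is the existing per-CPU
schedutil hook:

---8<---
/*
 * Hypothetical flag next to the existing SCHED_CPUFREQ_* bits in
 * include/linux/sched/cpufreq.h; the bit value here is arbitrary.
 */
#define SCHED_CPUFREQ_IDLE	(1U << 7)

static void sugov_update_single(struct update_util_data *hook, u64 time,
				unsigned int flags)
{
	struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);

	if (flags & SCHED_CPUFREQ_IDLE) {
		/*
		 * Blocked load has just been decayed from the idle path:
		 * utilization can only have gone down, so note the event
		 * but don't start a frequency change from here; the next
		 * regular update will pick up the lower value.
		 */
		return;
	}

	/* ... existing update path unchanged ... */
}
---8<---

The shared-policy path (sugov_update_shared()) would need the same early
check.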
>
> Similarly you could add ENQUEUE/DEQUEUE flags, I suppose. But let's do all
> that later in separate patches and evaluate the impact separately, OK?
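
For later reference, a sketch of what the ENQUEUE/DEQUEUE idea could look
like; the two flag names and the extra parameter on cfs_rq_util_change()
are hypothetical, today the function takes only the cfs_rq and passes a
plain 0 to cpufreq_update_util():

---8<---
/* Hypothetical flags; nothing like them exists in sched/cpufreq.h today. */
#define SCHED_CPUFREQ_ENQUEUE	(1U << 8)
#define SCHED_CPUFREQ_DEQUEUE	(1U << 9)

/*
 * Assumes cfs_rq_util_change() grows a flags argument so that callers can
 * say why utilization changed.
 */
static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, unsigned int flags)
{
	struct rq *rq = rq_of(cfs_rq);

	if (&rq->cfs == cfs_rq) {
		/*
		 * Let schedutil tell "a task just went to sleep" (DEQUEUE)
		 * apart from "new work just arrived" (ENQUEUE), instead of
		 * treating every utilization change the same way.
		 */
		cpufreq_update_util(rq, flags);
	}
}
---8<---

The callers on the enqueue and dequeue paths would then have to pass the
matching flag down, which is exactly the kind of plumbing better evaluated
in its own patch.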