Message-ID: <20170705113834.GB2659@e110439-lin>
Date: Wed, 5 Jul 2017 12:38:34 +0100
From: Patrick Bellasi <patrick.bellasi@....com>
To: Viresh Kumar <viresh.kumar@...aro.org>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@....com>,
Joel Fernandes <joelaf@...gle.com>,
Andres Oportus <andresoportus@...gle.com>,
Todd Kjos <tkjos@...roid.com>,
Morten Rasmussen <morten.rasmussen@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>
Subject: Re: [PATCH v2 1/6] cpufreq: schedutil: ignore sugov kthreads
On 05-Jul 10:30, Viresh Kumar wrote:
> On 04-07-17, 18:34, Patrick Bellasi wrote:
> > In systems where multiple CPUs share the same frequency domain, a small
> > workload on a CPU can still be subject to frequency spikes generated by
> > the activation of the sugov kthread.
> >
> > Since the sugov kthread is a special RT task, whose only goal is to
> > activate a frequency transition, it does not make sense for it to bias
> > the schedutil frequency selection policy.
> >
> > This patch exploits the information related to the current task to
> > silently ignore cpufreq_update_this_cpu() calls coming from the RT
> > scheduler while the sugov kthread is running.
> >
> > Signed-off-by: Patrick Bellasi <patrick.bellasi@....com>
> > Cc: Ingo Molnar <mingo@...hat.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> > Cc: Viresh Kumar <viresh.kumar@...aro.org>
> > Cc: linux-kernel@...r.kernel.org
> > Cc: linux-pm@...r.kernel.org
> >
> > ---
> > Changes from v1:
> > - move check before policy spinlock (JuriL)
> > ---
> > kernel/sched/cpufreq_schedutil.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index c982dd0..eaba6d6 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -218,6 +218,10 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
> > unsigned int next_f;
> > bool busy;
> >
> > + /* Skip updates generated by sugov kthreads */
> > + if (unlikely(current == sg_policy->thread))
> > + return;
> > +
> > sugov_set_iowait_boost(sg_cpu, time, flags);
> > sg_cpu->last_update = time;
> >
> > @@ -290,6 +294,10 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
> > unsigned long util, max;
> > unsigned int next_f;
> >
> > + /* Skip updates generated by sugov kthreads */
> > + if (unlikely(current == sg_policy->thread))
> > + return;
> > +
> > sugov_get_util(&util, &max);
>
> Yes we discussed this last time as well (I looked again at those discussions and
> am still confused a bit), but wanted to clarify one more time.
>
> After the 2nd patch of this series is applied, why will we still have this
> problem? As we concluded last time, the problem wouldn't happen until the
> time the sugov RT thread is running (Hint: work_in_progress). And once the sugov
> RT thread is gone, one of the other scheduling classes will take over and should
> update the flag pretty quickly.
>
> Are we worried about the time between the sugov RT thread finishes and when the
> CFS or IDLE sched class call the util handler again? If yes, then we will still
> have that problem for any normal RT/DL task. Isn't it?
Yes, we are worried about that time: without this check we can generate
spikes to the max OPP even when only relatively small FAIR tasks are
running.
The same problem does not exist for the other "normal" RT/DL tasks,
simply because for those tasks this is the expected behavior: we want
to go to max.
By contrast, the sugov kthread, despite being an RT task, is just
functional to making the "machinery" work: it is an actuator. Thus,
IMO it makes no sense from a design standpoint for it to interfere in
any way with what the "machinery" is doing.
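Just to illustrate that design point outside of schedutil, here is a
toy userspace analogy (made-up names, not kernel code): the controller
simply ignores samples generated by its own actuator thread.

#include <pthread.h>
#include <stdio.h>

static pthread_t actuator_tid;
static pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Update hook that may be called from any thread: updates generated by
 * the actuator thread itself are silently ignored, mirroring the
 * "current == sg_policy->thread" check in the patch above.
 */
static void controller_update(unsigned long util)
{
	pthread_mutex_lock(&ready_lock);
	int self_update = pthread_equal(pthread_self(), actuator_tid);
	pthread_mutex_unlock(&ready_lock);

	if (self_update)
		return;	/* skip updates generated by the actuator */

	printf("considering util=%lu for the next frequency request\n", util);
}

static void *actuator_fn(void *arg)
{
	(void)arg;
	/* The actuator's own activity must not bias the controller */
	controller_update(1024);	/* ignored */
	return NULL;
}

int main(void)
{
	/* Publish the actuator's thread id before it can observe it */
	pthread_mutex_lock(&ready_lock);
	pthread_create(&actuator_tid, NULL, actuator_fn, NULL);
	pthread_mutex_unlock(&ready_lock);

	controller_update(256);		/* considered: a regular task */
	pthread_join(actuator_tid, NULL);
	return 0;
}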
Finally, the second patch of this series fixes a somewhat symmetrical
issue: while this one avoids going to the max OPP, the next one avoids
staying at the max OPP once it is no longer needed.
Cheers Patrick
--
#include <best/regards.h>
Patrick Bellasi