Message-ID: <20170706055655.GR3532@vireshk-i7>
Date:   Thu, 6 Jul 2017 11:26:55 +0530
From:   Viresh Kumar <viresh.kumar@...aro.org>
To:     Patrick Bellasi <patrick.bellasi@....com>
Cc:     linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@....com>,
        Joel Fernandes <joelaf@...gle.com>,
        Andres Oportus <andresoportus@...gle.com>,
        Todd Kjos <tkjos@...roid.com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steve Muckle <smuckle.linux@...il.com>,
        Brendan Jackman <brendan.jackman@....com>
Subject: Re: [PATCH v2 3/6] cpufreq: schedutil: ensure max frequency while
 running RT/DL tasks

On 05-07-17, 14:41, Patrick Bellasi wrote:
> On 05-Jul 11:31, Viresh Kumar wrote:

> Just had a quick check, but I think something like that can work.
> We had an internal discussion with Brendan (now in CC), who had made
> a similar proposal.
> 
> The main counter-arguments for me were:
> 1. we want to reduce the pollution of scheduling class code with
>    schedutil-specific code, unless strictly necessary

s/schedutil/cpufreq/, as the util hooks are called for other things as
well.

> 2. we have never had "clear bit" semantics for flag updates
> 
> Thus this proposal seemed to me less of a discontinuity wrt the
> current interface. However, something similar to what you propose
> below should also work.

With the kind of problems we have at hand now, it would be good for the
governors to know what is currently queued on the CPU (i.e. the aggregation
of all the flags), and the only sane way of doing that is to clear a class's
flag once that class is done with it.

Otherwise we would need code that tries to derive the same information
indirectly, like what this patch does with the current task.
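
To make that concrete, here is a minimal user-space sketch (not kernel code;
the aggregate_flags() helper is purely illustrative) of the set-on-enqueue /
clear-on-dequeue semantics being discussed, reusing the SCHED_CPUFREQ_*
values from the quoted patch:

#include <stdio.h>

#define SCHED_CPUFREQ_RT     (1U << 0)
#define SCHED_CPUFREQ_DL     (1U << 1)
#define SCHED_CPUFREQ_IOWAIT (1U << 2)
#define SCHED_CPUFREQ_CLEAR  (1U << 31)

/* Governor-side aggregate of what is currently queued on the CPU. */
static unsigned int cpu_flags;

/* Fold one notification from a scheduling class into the aggregate. */
static void aggregate_flags(unsigned int flags)
{
        if (flags & SCHED_CPUFREQ_CLEAR)
                cpu_flags &= ~(flags & ~SCHED_CPUFREQ_CLEAR); /* class done */
        else
                cpu_flags |= flags;                           /* work queued */
}

int main(void)
{
        aggregate_flags(SCHED_CPUFREQ_RT);                        /* RT enqueued */
        aggregate_flags(SCHED_CPUFREQ_DL);                        /* DL enqueued */
        aggregate_flags(SCHED_CPUFREQ_CLEAR | SCHED_CPUFREQ_RT);  /* RT done */

        /* DL is still queued, so the governor keeps honouring the DL request. */
        printf("aggregated flags: %#x\n", cpu_flags);
        return 0;
}

With such an aggregate, the governor keeps boosting as long as the RT or DL
bit is set and drops the boost only once each class has sent its CLEAR
notification.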

> Let's collect some more feedback...

Sure.
 
> > diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
> > index d2be2ccbb372..e81a6b5591f5 100644
> > --- a/include/linux/sched/cpufreq.h
> > +++ b/include/linux/sched/cpufreq.h
> > @@ -11,6 +11,10 @@
> >  #define SCHED_CPUFREQ_DL       (1U << 1)
> >  #define SCHED_CPUFREQ_IOWAIT   (1U << 2)
> >  
> > +#define SCHED_CPUFREQ_CLEAR    (1U << 31)
> > +#define SCHED_CPUFREQ_CLEAR_RT (SCHED_CPUFREQ_CLEAR | SCHED_CPUFREQ_RT)
> > +#define SCHED_CPUFREQ_CLEAR_DL (SCHED_CPUFREQ_CLEAR | SCHED_CPUFREQ_DL)
> > +
> >  #define SCHED_CPUFREQ_RT_DL    (SCHED_CPUFREQ_RT | SCHED_CPUFREQ_DL)
> >  
> >  #ifdef CONFIG_CPU_FREQ
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 076a2e31951c..f32e15d59d62 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -218,6 +218,9 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
> >         unsigned int next_f;
> >         bool busy;
> >  
> > +       if (flags & SCHED_CPUFREQ_CLEAR)
> > +               return;
> 
> Here we should still clear the flags, like we do for the shared
> case... just to keep the internal state consistent with the
> notifications we have got from the scheduling classes.

The sg_cpu->flags field currently isn't used for the single-CPU-per-policy
case, only for shared policies. But yes, we now need to maintain it here as
well, so we know everything that is queued on a CPU.
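
For the single-CPU path, something along these lines is what I have in mind
(a rough, untested sketch against the quoted patch, assuming sg_cpu->flags is
now kept up to date here as well):

        if (flags & SCHED_CPUFREQ_CLEAR) {
                /* Drop the class's bit from the aggregate before bailing out. */
                sg_cpu->flags &= ~(flags & ~SCHED_CPUFREQ_CLEAR);
                return;
        }

        sg_cpu->flags |= flags;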

-- 
viresh
