Date:   Wed, 12 Jul 2017 19:02:52 -0700
From:   Joel Fernandes <joelaf@...gle.com>
To:     Viresh Kumar <viresh.kumar@...aro.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Juri Lelli <juri.lelli@....com>,
        Andres Oportus <andresoportus@...gle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Len Brown <lenb@...nel.org>,
        "Rafael J . Wysocki" <rjw@...ysocki.net>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH RFC v4] cpufreq: schedutil: Make iowait boost more energy efficient

Hi Viresh,

On Tue, Jul 11, 2017 at 10:00 PM, Viresh Kumar <viresh.kumar@...aro.org> wrote:
[..]
>> Another approach than setting min in sugov_set_iowait_boost is, since
>> we have already retrieved the current util, we can check if flags ==
>> SCHED_CPUFREQ_IOWAIT and then set the initial iowait_boost such that
>> (iowait_boost / iowait_boost_max) is at least equal to (util / max) or
>> iowait_boost_min, whichever is lower.
>
> So my concerns weren't only about the initial min value, but also that you
> reduce the freq from sugov_set_iowait_boost(). We can discuss what the ideal
> value to start with can be.
>
>> This still will not increase
>> frequency on the first request, but will ensure the next one will
>> benefit.
>
> Only if no non-enqueue-path request lands in between.
>
>> Yes, I've seen that happen in my testing (consecutive iowait).
>
> The CFS scheduler can send a util update request every 1 ms, and I am not sure
> why that isn't happening in your case.
>
> What is the time between two consecutive IOWAIT requests in your case?
> Maybe it is too short (of course that isn't in your control :). But if we take
> Peter's example, then there will surely be a non-enqueue-path request between
> two IOWAIT requests.

Hmm, Ok. I can try to do some measurements about consecutive calls
soon and let you know how often it happens. I also noticed it's
possible for this to be called twice in the enqueue path itself. It
probably affects my patch more than yours because of starting from min
rather than max.

>> I
>> haven't seen the other case where you have IOWAIT followed by
>> non-IOWAIT for a repeated set of IOWAIT requests. Would you be more
>> comfortable if we moved sugov_set_iowait_boost() after
>> sugov_should_update_freq()?
>
> That may make us ignore all IOWAIT requests that happen within the
> rate_limit_us window. And that would be bad IMHO.

True..

>> That way if there are consecutive
>> requests in the same path, then rate-limiting will most likely
>> prevent such updates. I will also try to collect some stats, as you
>> suggested, to see how often (if at all) this can happen.
>
> Completely untested, but what about something like this ? This should get rid of
> the spikes you were getting.

I actually like your approach better of consuming the flag first but
computing the boost later when it's used. It's also more immune to the
problem you described. Some comments below:

>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 076a2e31951c..3459f327c94e 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -53,6 +53,7 @@ struct sugov_cpu {
>         struct update_util_data update_util;
>         struct sugov_policy *sg_policy;
>
> +       bool iowait_boost_pending;
>         unsigned long iowait_boost;
>         unsigned long iowait_boost_max;
>         u64 last_update;
> @@ -169,7 +170,12 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
>                                    unsigned int flags)
>  {
>         if (flags & SCHED_CPUFREQ_IOWAIT) {
> -               sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
> +               sg_cpu->iowait_boost_pending = true;
> +
> +               if (!sg_cpu->iowait_boost) {
> +                       sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->cur;
> +                       sg_cpu->iowait_boost >>= 1;
> +               }

Hmm, this doesn't look right to me... why are we decaying in this path?
I think nothing other than setting the pending flag and the initial
iowait_boost value is needed here.

Also I feel this is more "spike" prone than setting it initially to
min. As Peter was saying, since we apply the boost only if it
increases the frequency and not when it decreases it, starting from
min should at worst just result in the boost being ignored the first
time.

>         } else if (sg_cpu->iowait_boost) {
>                 s64 delta_ns = time - sg_cpu->last_update;
>
> @@ -182,17 +188,26 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
>  static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
>                                unsigned long *max)
>  {
> -       unsigned long boost_util = sg_cpu->iowait_boost;
> -       unsigned long boost_max = sg_cpu->iowait_boost_max;
> +       unsigned long boost_util, boost_max;
>
> -       if (!boost_util)
> +       if (!sg_cpu->iowait_boost)
>                 return;
>
> +       if (sg_cpu->iowait_boost_pending) {
> +               sg_cpu->iowait_boost_pending = false;
> +               sg_cpu->iowait_boost = min(sg_cpu->iowait_boost << 1,
> +                                          sg_cpu->iowait_boost_max);
> +       } else {
> +               sg_cpu->iowait_boost >>= 1;
> +       }

And then this path will do the decay correctly when the boost is applied.

thanks,

-Joel
