Message-ID: <20170719061937.GB352@vireshk-i7>
Date:   Wed, 19 Jul 2017 11:49:37 +0530
From:   Viresh Kumar <viresh.kumar@...aro.org>
To:     Joel Fernandes <joelaf@...gle.com>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Juri Lelli <juri.lelli@....com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Andres Oportus <andresoportus@...gle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Len Brown <lenb@...nel.org>,
        "Rafael J . Wysocki" <rjw@...ysocki.net>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH RFC v5] cpufreq: schedutil: Make iowait boost more energy
 efficient

On 18-07-17, 21:39, Joel Fernandes wrote:
> Not really; to me B will still work because, when the flag is set, we
> are correctly double boosting in the next cycle.
> 
> Taking an example, with B = flag is set and D = flag is not set
> 
> F = Fmin (minimum)
> 
> iowait flag       B  B    B    D    D    D
> resulting boost   F  2*F  4*F  4*F  2*F  F

What about this?

iowait flag       B  D    B    D    B    D
resulting boost   F  2*F  F    2*F  F    2*F

Isn't this the worst behavior we could end up with?

> What will not work is C, but as I mentioned in my last email, that
> would cause us to delay the iowait boost halving by at most 1 cycle.
> Is that really an issue, considering we are starting from min rather
> than max? Note that cases A. and B. still work.
> 
> Considering the following cases:
> (1) min freq is 800MHz, and it takes up to 4 cycles to reach 4GHz with
> the flag set. At this point I think it's likely we will run for many
> more cycles, which means keeping the boost active for 1 extra cycle
> isn't that big a deal. Even if we run for just 5 cycles with boost,
> only the last cycle will suffer from C not decaying as soon as
> possible. Compared to the current code, where we run at max from the
> first cycle, that's not that bad.
> 
> (2) we have a transient type of load; in this case we're probably not
> reaching the full max immediately, so even if we delay the decaying,
> it's still not as bad as what we have currently.
> 
> I think, considering that the code is way cleaner than any other
> approach, it's a small price to pay. Also keep in mind that this
> patch is still an improvement over the current spike; even though, as
> you said, it's still a bit spiky, it's still better, right?
> 
> Functionally the code is working and I think it is also clean, but if
> you feel that it's still confusing, then I'm open to rewriting it.

I am not worried about being boosted for a bit more time, but about
the spikes that occur even when we do not want a freq change.

> > And so in my initial solution I reversed the order in
> > sugov_iowait_boost().
> 
> Yes, but to fix A. you had to divide by 2 in sugov_set_iowait_boost,
> and then multiply by 2 later in sugov_iowait_boost to keep the first
> boost at min. That IMO was confusing, so my modified patch did it
> differently.

Yeah, it wasn't great for sure.

I hope the following will work for everyone:

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 45fcf21ad685..ceac5f72d8da 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -53,6 +53,7 @@ struct sugov_cpu {
        struct update_util_data update_util;
        struct sugov_policy *sg_policy;
 
+       bool iowait_boost_pending;
        unsigned long iowait_boost;
        unsigned long iowait_boost_max;
        u64 last_update;
@@ -169,7 +170,17 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
                                   unsigned int flags)
 {
        if (flags & SCHED_CPUFREQ_IOWAIT) {
-               sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
+               if (sg_cpu->iowait_boost_pending)
+                       return;
+
+               sg_cpu->iowait_boost_pending = true;
+
+               if (sg_cpu->iowait_boost) {
+                       sg_cpu->iowait_boost = min(sg_cpu->iowait_boost << 1,
+                                                  sg_cpu->iowait_boost_max);
+               } else {
+                       sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
+               }
        } else if (sg_cpu->iowait_boost) {
                s64 delta_ns = time - sg_cpu->last_update;
 
@@ -182,17 +193,23 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
 static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
                               unsigned long *max)
 {
-       unsigned long boost_util = sg_cpu->iowait_boost;
-       unsigned long boost_max = sg_cpu->iowait_boost_max;
+       unsigned long boost_util, boost_max;
 
-       if (!boost_util)
+       if (!sg_cpu->iowait_boost)
                return;
 
+       if (sg_cpu->iowait_boost_pending)
+               sg_cpu->iowait_boost_pending = false;
+       else
+               sg_cpu->iowait_boost >>= 1;
+
+       boost_util = sg_cpu->iowait_boost;
+       boost_max = sg_cpu->iowait_boost_max;
+
        if (*util * boost_max < *max * boost_util) {
                *util = boost_util;
                *max = boost_max;
        }
-       sg_cpu->iowait_boost >>= 1;
 }
 
 #ifdef CONFIG_NO_HZ_COMMON


-- 
viresh
