Message-ID: <CAJZ5v0gePrW+hnR4UdGnxuibhcUvcfHSfULFok3tYEsOvD7eLA@mail.gmail.com>
Date:   Mon, 10 Apr 2017 22:59:16 +0200
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Joel Fernandes <joelaf@...gle.com>
Cc:     "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Linux PM <linux-pm@...r.kernel.org>,
        Juri Lelli <juri.lelli@....com>,
        LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Morten Rasmussen <morten.rasmussen@....com>
Subject: Re: [RFC/RFT][PATCH 2/2] cpufreq: schedutil: Utilization aggregation

On Mon, Apr 10, 2017 at 8:39 AM, Joel Fernandes <joelaf@...gle.com> wrote:
> Hi Rafael,

Hi,

> On Sun, Apr 9, 2017 at 5:11 PM, Rafael J. Wysocki <rjw@...ysocki.net> wrote:
>> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>>

[cut]

>> @@ -154,22 +153,30 @@ static unsigned int get_next_freq(struct
>>         return cpufreq_driver_resolve_freq(policy, freq);
>>  }
>>
>> -static void sugov_get_util(unsigned long *util, unsigned long *max)
>> +static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned int flags)
>>  {
>> +       unsigned long cfs_util, cfs_max;
>>         struct rq *rq = this_rq();
>> -       unsigned long cfs_max;
>>
>> -       cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id());
>> +       sg_cpu->flags |= flags & SCHED_CPUFREQ_RT_DL;
>> +       if (sg_cpu->flags & SCHED_CPUFREQ_RT_DL)
>> +               return;
>>
>> -       *util = min(rq->cfs.avg.util_avg, cfs_max);
>> -       *max = cfs_max;
>> +       cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id());
>> +       cfs_util = min(rq->cfs.avg.util_avg, cfs_max);
>> +       if (sg_cpu->util * cfs_max < sg_cpu->max * cfs_util) {
>
> Assuming all CPUs have equal compute capacity, doesn't this mean that
> sg_cpu->util is updated only if cfs_util > sg_cpu->util?

Yes, it does.

> Maybe I missed something, but wouldn't we want sg_cpu->util to be
> reduced as well when cfs_util reduces? Doesn't this condition
> basically discard all updates to sg_cpu->util that could have reduced
> it?
>
>> +               sg_cpu->util = cfs_util;
>> +               sg_cpu->max = cfs_max;
>> +       }
>>  }


Well, that's the idea. :-)

During the discussion at the OSPM summit we concluded that discarding
all of the utilization changes that occur between consecutive frequency
updates was not a good idea, so they needed to be aggregated somehow.
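
For illustration only (a standalone sketch with made-up names; the
quoted hunk does not show where the aggregate is consumed or restarted),
"aggregating between frequency updates" means roughly this:

struct util_agg {
	unsigned long util;	/* aggregated utilization sample */
	unsigned long max;	/* capacity that sample is scaled against */
};

/* Called from the scheduler hooks, possibly many times per window. */
static void agg_fold_sample(struct util_agg *a,
			    unsigned long util, unsigned long max)
{
	/* keep whichever sample is the larger fraction of its capacity */
	if (a->util * max < a->max * util) {
		a->util = util;
		a->max = max;
	}
}

/* Called when a frequency update actually happens (not in the hunk). */
static void agg_start_new_window(struct util_agg *a)
{
	a->util = 0;
	a->max = 1;
}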

There are a few ways to aggregate them, but the most straightforward
one (and one that actually makes sense) is to take the maximum as the
aggregate value.
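
Concretely, the "maximum" is a maximum of util/max ratios, and the
condition in the quoted sugov_get_util() cross-multiplies to avoid a
division (sketch below, not the actual code):

/*
 *     new_util / new_max  >  cur_util / cur_max
 * <=> cur_util * new_max  <  cur_max * new_util
 */
static int frac_is_higher(unsigned long cur_util, unsigned long cur_max,
			  unsigned long new_util, unsigned long new_max)
{
	return cur_util * new_max < cur_max * new_util;
}

With equal capacities (cur_max == new_max) this reduces to the plain
cfs_util > sg_cpu->util comparison mentioned above.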

Of course, this means that we skew things towards performance here,
but I'm not particularly worried about that. :-)

Thanks,
Rafael
