Date:   Tue, 11 Apr 2017 22:53:06 +0200
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Joel Fernandes <joelaf@...gle.com>
Cc:     "Rafael J. Wysocki" <rafael@...nel.org>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Linux PM <linux-pm@...r.kernel.org>,
        Juri Lelli <juri.lelli@....com>,
        LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Morten Rasmussen <morten.rasmussen@....com>
Subject: Re: [RFC/RFT][PATCH 2/2] cpufreq: schedutil: Utilization aggregation

On Tue, Apr 11, 2017 at 3:57 AM, Joel Fernandes <joelaf@...gle.com> wrote:
> On Mon, Apr 10, 2017 at 1:59 PM, Rafael J. Wysocki <rafael@...nel.org> wrote:
> [..]
>>>> +               sg_cpu->util = cfs_util;
>>>> +               sg_cpu->max = cfs_max;
>>>> +       }
>>>>  }
>>
>>
>> Well, that's the idea. :-)
>>
>> During the discussion at the OSPM-summit we concluded that discarding
>> all of the utilization changes between the points at which frequency
>> updates actually happened was not a good idea, so they needed to be
>> aggregated somehow.
>>
>> There are a few ways to aggregate them, but the most straightforward
>> one (and one which actually makes sense) is to take the maximum as the
>> aggregate value.
>>
>> Of course, this means that we skew things towards performance here,
>> but I'm not worried that much. :-)
>
> Does this increase the chance of going to idle at higher frequency?
> Say in the last rate limit window, we have a high request followed by
> a low request. After the window closes, by this algorithm we ignore
> the low request and take the higher valued request, and then enter
> idle. Then, wouldn't we be idling at higher frequency? I guess if you
> enter "cluster-idle" then probably this isn't a big deal (like on the
> ARM64 platforms I am working on). But I wasn't sure how expensive
> entering C-states at a higher frequency is on Intel platforms, or
> whether it is even a concern. :-D

It isn't a concern at all AFAICS.

Thanks,
Rafael
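
For reference, below is a minimal sketch of the max-based aggregation Rafael
describes above: between two actual frequency updates, keep only the sample
with the highest util/max ratio. The structure and helper names here are
placeholders for illustration, not code from the patch itself.

/*
 * Illustrative sketch only: aggregate utilization samples seen between
 * two frequency updates by keeping the one with the highest util/max
 * ratio.  Names below are placeholders, not the patch's actual code.
 */
struct util_agg {
	unsigned long util;	/* highest utilization seen so far */
	unsigned long max;	/* capacity that utilization is scaled against */
};

/* Fold a new (util, max) sample into the aggregate. */
static void util_agg_add(struct util_agg *agg, unsigned long util,
			 unsigned long max)
{
	/*
	 * Compare util/max against agg->util/agg->max without dividing:
	 * util * agg->max > agg->util * max  <=>  util/max > agg->util/agg->max
	 */
	if (util * agg->max > agg->util * max) {
		agg->util = util;
		agg->max = max;
	}
}

/* Clear the aggregate once a frequency update has actually been applied. */
static void util_agg_reset(struct util_agg *agg)
{
	agg->util = 0;
	agg->max = 1;
}

Keeping the maximum is what "skew things towards performance" refers to: a
brief high request cannot be erased by a later low one until a frequency
update has actually gone out.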
