Date:   Wed, 7 Jun 2017 09:30:01 +0530
From:   Viresh Kumar <viresh.kumar@...aro.org>
To:     Leonard Crestez <leonard.crestez@....com>
Cc:     linaro-kernel@...ts.linaro.org, linux-pm@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Rafael Wysocki <rjw@...ysocki.net>
Subject: Re: [PATCH] cpufreq: Find transition latency dynamically

On 06-06-17, 18:48, Leonard Crestez wrote:
> I remember checking if transition latency is correct for imx6q-cpufreq
> and it does not appear to be. Maybe because i2c latency of regulator
> adjustments is not counted in?

Plus software latency, and other platform-specific overheads.

And that's why I am trying to introduce dynamic latencies.
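
To illustrate what I mean by finding it dynamically: the core could simply
time a real transition, roughly like this (just a sketch, not the actual
patch; the helper is hypothetical, but cpufreq_driver_target() and the
ktime API are real):

/* Time one real frequency switch to sample the latency. Because this
 * covers the driver's whole ->target() path, regulator/i2c overhead
 * gets counted in naturally. */
static s64 sample_transition_latency(struct cpufreq_policy *policy,
				     unsigned int target_khz)
{
	ktime_t start = ktime_get();

	if (cpufreq_driver_target(policy, target_khz, CPUFREQ_RELATION_L))
		return -1;	/* switch failed, no valid sample */

	return ktime_to_ns(ktime_sub(ktime_get(), start));
}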

> It seems to me it would be much nicer to have a special flag for this
> instead of overriding CPUFREQ_ETERNAL.

Well, CPUFREQ_ETERNAL is there specifically for drivers that don't know their
latencies, so we aren't really overriding it here. We are just trying to do
something sensible for them.

On top of that, I intend to move as many of the current drivers as possible to
specifying CPUFREQ_ETERNAL instead of passing a static transition latency. The
transition_latency field will stay for platforms that don't want the kernel to
find it dynamically, though.
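
A converted driver would then look something like this (illustrative "foo"
driver, all names hypothetical; cpufreq_generic_init() is the real core
helper):

static int foo_cpufreq_init(struct cpufreq_policy *policy)
{
	/* Pass CPUFREQ_ETERNAL instead of a hardcoded latency and let
	 * the core find the real value. foo_freq_table is a hypothetical
	 * driver-provided frequency table. */
	return cpufreq_generic_init(policy, foo_freq_table, CPUFREQ_ETERNAL);
}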

The only remaining case from the above is a platform that sets its latency to
CPUFREQ_ETERNAL but doesn't want the core to find the latency dynamically. I
would like to learn why we would need that; it is still doable anyway by
setting the latency to something high enough that it disallows use of the
ondemand/conservative governors.
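
For reference, CPUFREQ_ETERNAL is defined as (-1) in include/linux/cpufreq.h,
which reads back as UINT_MAX once stored in the unsigned transition_latency
field, so a governor sanity check along these lines rejects the policy (the
exact macro and its location are from memory, so treat this as illustrative):

/* CPUFREQ_ETERNAL == (-1) becomes UINT_MAX ns in the unsigned field,
 * so any sane upper bound disallows ondemand/conservative: */
if (policy->cpuinfo.transition_latency > TRANSITION_LATENCY_LIMIT)
	return -EINVAL;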

> Also, wouldn't it be possible to update this dynamically? Just measure
> the duration every time it happens and do an update like latency =
> (latency * 7 + latest_latency) / 8.

I had that in mind, but first I want to know of the cases where platforms
don't get the latencies right in the first go (at boot). If we update
latencies at runtime, there are other things we need to do as well, for
example updating rate_limit_us of the schedutil governor if it hasn't been
overridden by the user yet. So it's not that straightforward.
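
For context, schedutil derives its default rate limit from the latency only
once, at init time, roughly like this (paraphrased from memory of
kernel/sched/cpufreq_schedutil.c):

/* Scale the transition latency (ns) into a default rate_limit_us. */
unsigned int lat = policy->cpuinfo.transition_latency / NSEC_PER_USEC;

tunables->rate_limit_us = LATENCY_MULTIPLIER;
if (lat)
	tunables->rate_limit_us *= lat;

If the latency changes at runtime, that value would have to be recomputed
(unless the user has already written to it), and similar bookkeeping may be
needed elsewhere too.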

Plus, we don't need the average here as you suggested; we are rather
interested in the max (worst-case latency).
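
In code terms, instead of the averaging update, something like this (sketch):

/* Your suggestion: exponential moving average with 1/8 weight. */
latency = (latency * 7 + latest_latency) / 8;

/* What we'd actually want to track is the worst case: */
latency = max(latency, latest_latency);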

Thanks for your reviews :)

-- 
viresh
