Message-ID: <20170522105522.GG6510@vireshk-i7>
Date:   Mon, 22 May 2017 16:25:22 +0530
From:   Viresh Kumar <viresh.kumar@...aro.org>
To:     Brendan Jackman <brendan.jackman@....com>
Cc:     Rafael Wysocki <rjw@...ysocki.net>, linaro-kernel@...ts.linaro.org,
        linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [PATCH] cpufreq: dt: Set default policy->transition_delay_ns

On 22-05-17, 11:45, Brendan Jackman wrote:
> Hi Viresh,
> 
> On Mon, May 22 2017 at 05:10, Viresh Kumar wrote:
> > The rate_limit_us for the schedutil governor is getting set to 500 ms by
> > default for the ARM64 hikey board, and that's way too much, even for a
> > default value. Let's set the default transition_delay_ns to something
> > more realistic (10 ms), while userspace always has a chance to set
> > whatever it wants.
> 
> Just a thought - do you think we can treat the reported transition
> latency as a proxy for the "cost" of freq transitions?  I.e. assume that
> on platforms with very fast frequency switching it's probably cheap to
> switch frequency and we want schedutil to respond quickly, whereas on
> platforms with big latencies, frequency switches might be expensive and
> we probably want hysteresis.
> 
> If that makes sense then maybe we could use 10 * transition_latency /
> NSEC_PER_USEC when transition_latency is reported (see the sketch just
> below)? Otherwise 10 ms seems sensible to me.
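
(For concreteness, a minimal sketch of that heuristic. The helper name
default_transition_delay_us() is made up for illustration; only
policy->cpuinfo.transition_latency, NSEC_PER_USEC and USEC_PER_MSEC are
existing kernel symbols.)

	/*
	 * Illustrative only: derive the default transition delay from the
	 * reported transition latency (ns), falling back to a flat 10 ms
	 * when no latency is reported.
	 */
	static unsigned int default_transition_delay_us(struct cpufreq_policy *policy)
	{
		unsigned int latency_ns = policy->cpuinfo.transition_latency;

		if (latency_ns)
			return 10 * latency_ns / NSEC_PER_USEC;	/* 10x the latency, in us */

		return 10 * USEC_PER_MSEC;			/* fallback: 10 ms */
	}

With hikey's reported 500 us (500000 ns) latency this would give a 5 ms delay.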

So my platform (hikey) does provide a transition latency of 500 us. But
schedutil multiplies that by LATENCY_MULTIPLIER (1000), which turns it
into a rate_limit_us of 500000 (i.e. 500 ms), and that is unacceptable.
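
(For reference, a sketch of the computation being described: this is
approximately what schedutil's sugov_init() did at the time, not a
verbatim copy.)

	/*
	 * Approximate schedutil default: convert the reported transition
	 * latency from ns to us, then scale it by LATENCY_MULTIPLIER (1000).
	 */
	unsigned int lat;

	tunables->rate_limit_us = LATENCY_MULTIPLIER;
	lat = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
	if (lat)
		tunables->rate_limit_us *= lat;

	/* hikey: 500 us -> lat = 500 -> rate_limit_us = 500000 us = 500 ms */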

@Rafael: Why does LATENCY_MULTIPLIER have such a high value? I am not
sure I completely understand why we have this multiplier :(

-- 
viresh
