Message-ID: <5615B55C.8010804@linaro.org>
Date:	Wed, 7 Oct 2015 17:14:20 -0700
From:	Steve Muckle <steve.muckle@...aro.org>
To:	Juri Lelli <juri.lelli@....com>,
	Peter Zijlstra <peterz@...radead.org>,
	Morten Rasmussen <Morten.Rasmussen@....com>,
	"mturquette@...libre.com" <mturquette@...libre.com>
Cc:	"mingo@...hat.com" <mingo@...hat.com>,
	"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
	"daniel.lezcano@...aro.org" <daniel.lezcano@...aro.org>,
	Dietmar Eggemann <Dietmar.Eggemann@....com>,
	"yuyang.du@...el.com" <yuyang.du@...el.com>,
	"rjw@...ysocki.net" <rjw@...ysocki.net>,
	"sgurrappadi@...dia.com" <sgurrappadi@...dia.com>,
	"pang.xunlei@....com.cn" <pang.xunlei@....com.cn>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>
Subject: Re: [RFCv5 PATCH 38/46] sched: scheduler-driven cpu frequency
 selection

On 08/25/2015 03:45 AM, Juri Lelli wrote:
> But, it is true that if the above events happened the other way around
> (we trigger an update after load balancing and a new task arrives), we
> may miss the opportunity to jump to max with the new task. In my mind
> this is probably not a big deal, as we'll have a tick pretty soon that
> will fix things anyway (saving us some complexity in the backend).
> 
> What do you think?

I fear that waiting up to a full tick to resolve a shortfall in CPU
bandwidth will cause complaints.

Thinking about how this would be implemented raises a couple of
questions for me, though.

1. To avoid issuing a frequency change request while one is already in
flight, the current code throttles using the cpufreq driver's stated
transition latency. Wouldn't it be more accurate to block further
requests until the CPUFREQ_POSTCHANGE notifier has run? That would also
remove the need to supply a latency value at all, and since frequency
transitions can take different amounts of time depending on system
state, a single latency value may often be wrong anyway. A rough sketch
of what I mean is below.
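Something along these lines (sketch only, not taken from the patch; the
sched_freq_* names and the single global flag are made up for
illustration), gating new requests on the transition notifier rather
than on a latency estimate:

/*
 * Sketch: refuse a new frequency request while one is in flight, and
 * clear the flag from the CPUFREQ_POSTCHANGE transition notifier.
 * The names here are hypothetical.
 */
#include <linux/atomic.h>
#include <linux/cpufreq.h>
#include <linux/notifier.h>

static atomic_t sched_freq_change_in_flight = ATOMIC_INIT(0);

/* Called before queuing a frequency change request. */
static bool sched_freq_try_start_transition(void)
{
	/* Only proceed if no transition is currently pending. */
	return atomic_cmpxchg(&sched_freq_change_in_flight, 0, 1) == 0;
}

/* Clear the flag once the driver reports the transition has finished. */
static int sched_freq_transition_notifier(struct notifier_block *nb,
					  unsigned long event, void *data)
{
	if (event == CPUFREQ_POSTCHANGE)
		atomic_set(&sched_freq_change_in_flight, 0);

	return NOTIFY_OK;
}

static struct notifier_block sched_freq_nb = {
	.notifier_call = sched_freq_transition_notifier,
};

static int __init sched_freq_notifier_init(void)
{
	return cpufreq_register_notifier(&sched_freq_nb,
					 CPUFREQ_TRANSITION_NOTIFIER);
}
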

2. The decision of whether or not to call into the low-level cpufreq
driver from the scheduler hot paths currently hinges on whether or not
that driver will sleep. Even if the driver does not sleep, however, the
latency to enqueue a frequency change (and to complete it, if the
low-level driver is not asynchronous) may still be high, making it
unsuitable to run in a scheduler hot path. Should the semantics of the
flag be changed to indicate whether a cpufreq driver is fast enough to
run in this context? Sleeping would of course still mean that it is
not. A sketch of what I have in mind follows.
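Roughly this (again purely hypothetical: the driver_fast_in_sched flag
is the semantics change being proposed, and the sched_freq_switch_now()
/ sched_freq_queue_work() helpers just stand in for whatever the
backend provides):

/* Backend hooks assumed to exist elsewhere (hypothetical names). */
void sched_freq_switch_now(int cpu, unsigned int next_freq);
void sched_freq_queue_work(int cpu, unsigned int next_freq);

/*
 * Hypothetical "fast enough for scheduler context" flag, set once at
 * driver registration. A sleeping driver could never set it, and a
 * non-sleeping but slow driver would also leave it clear.
 */
static bool driver_fast_in_sched;

static void sched_freq_request(int cpu, unsigned int next_freq)
{
	if (driver_fast_in_sched) {
		/* Fast, non-sleeping driver: do the switch inline. */
		sched_freq_switch_now(cpu, next_freq);
	} else {
		/* Slow and/or sleeping driver: defer to a kthread. */
		sched_freq_queue_work(cpu, next_freq);
	}
}
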

