Date:	Wed, 15 Feb 2012 14:38:05 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Saravana Kannan <skannan@...eaurora.org>
Cc:	Ingo Molnar <mingo@...e.hu>, linaro-kernel@...ts.linaro.org,
	Russell King <linux@....linux.org.uk>,
	Nicolas Pitre <nico@...xnic.net>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Oleg Nesterov <oleg@...hat.com>, cpufreq@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Anton Vorontsov <anton.vorontsov@...aro.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Mike Chan <mike@...roid.com>, Dave Jones <davej@...hat.com>,
	Todd Poynor <toddpoynor@...gle.com>, kernel-team@...roid.com,
	linux-arm-kernel@...ts.infradead.org,
	Arjan Van De Ven <arjan@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH RFC 0/4] Scheduler idle notifiers and users

On Tue, 2012-02-14 at 15:20 -0800, Saravana Kannan wrote:
> On 02/11/2012 06:45 AM, Ingo Molnar wrote:
> >
> > * Saravana Kannan<skannan@...eaurora.org>  wrote:
> >
> >> When you say accommodate all hardware, does it mean we will
> >> keep around CPUfreq and allow attempts at improving it? Or we
> >> will completely move to scheduler based CPU freq scaling, but
> >> won't try to force atomicity? Say, may be queue up a
> >> notification to a CPU driver to scale up the frequency as soon
> >> as it can?
> >
> > I don't think we should (or even could) force atomicity - we
> > adapt to whatever the hardware can do.
> 
> Maybe I misread the emails from Peter and you, but it sounded like the 
> idea being proposed was to directly do a freq change from the scheduler. 
> That would force the freq change API to be atomic (whether it can be 
> implemented is another issue). That's what I was referring to when I 
> loosely used the term "force atomicity".

Right, so we all agree cpufreq wants scheduler notifications because
polling sucks. The result is indeed you get to do cpufreq from atomic
context, because scheduling from the scheduler is 'interesting'.

> > But the design should be directed at systems where frequency
> > changes can be done in a reasonably fast manner. That is what the
> > future is - any change we initiate today takes years to reach
> > actual products/systems.
> 
> As long as the new design doesn't treat archs needing schedulable 
> context to set freq as a second class citizen, I think we would all be 
> happy.

I would really really like to do just that, if only to encourage
hardware people to just do the thing in hardware. Wanting both ultimate
power savings and crappy hardware just doesn't work -- and yes, I'm
sticking to my claim that a PMIC on i2c is shit, as is having to
manually sync up voltage and freq changes.

>  Because it's not just a matter of it being old hardware. 
> Sometimes the decision to let the SW do the voltage scaling also comes 
> down to HW cost. Considering Linux runs on such a wide set of archs, I 
> think we shouldn't treat the need for schedulable context for freq 
> setting as "broken" or "not sane".

So you'd rather spend double the money on trying to get software working
on broken ass hardware?

A lot of these "let's save 3 transistors, software can fix it up"
hardware feat^Wfailures end up costing more in software fixup than the
hardware savings. I'm sure tglx can share a few stories here.

Now we could probably kludge something together, and we might have to,
but I'll hate you guys for it.
