Open Source and information security mailing list archives
 
Date:	Wed, 5 Feb 2014 04:25:12 -0500
From:	Tejun Heo <tj@...nel.org>
To:	Stanislaw Gruszka <sgruszka@...hat.com>
Cc:	Johannes Berg <johannes@...solutions.net>,
	Zoran Markovic <zoran.markovic@...aro.org>,
	linux-kernel@...r.kernel.org, linux-wireless@...r.kernel.org,
	netdev@...r.kernel.org, Shaibal Dutta <shaibal.dutta@...adcom.com>,
	"John W. Linville" <linville@...driver.com>,
	"David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC PATCH] net: wireless: move regulatory timeout work to power
 efficient workqueue

Hello,

On Wed, Feb 05, 2014 at 10:17:42AM +0100, Stanislaw Gruszka wrote:
> What are the selection criteria when choosing between system_wq and
> system_power_efficient_wq in drivers?  IOW, if I were writing a new
> driver, which workqueue should I use, and when?

Yeah, it's a bit ad-hoc at the moment.  The original intention was
just to mark the ones which can be shown to have a noticeable power
impact, which weren't expected to be too many; but we may now have a
self-feeding feedback loop growing new usages, likely somewhat
overzealously, and we might be better off making things more generic.
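For reference, opting a work item into the power-efficient queue is a
one-line change at the queueing site.  A minimal kernel-style sketch
(the driver names are hypothetical; the API calls are the real
workqueue ones):

```c
#include <linux/workqueue.h>

/* Hypothetical driver housekeeping work with no per-cpu requirement. */
static void mydrv_timeout_fn(struct work_struct *work)
{
	/* e.g. tear down state after a regulatory timeout */
}
static DECLARE_DELAYED_WORK(mydrv_timeout_work, mydrv_timeout_fn);

static void mydrv_arm_timeout(unsigned long delay_jiffies)
{
	/*
	 * schedule_delayed_work() would queue on system_wq and run on
	 * the local CPU's pool; queueing on system_power_efficient_wq
	 * instead lets the scheduler place the work on an already-awake
	 * CPU when workqueue.power_efficient is enabled, and behaves
	 * like system_wq when it isn't.
	 */
	queue_delayed_work(system_power_efficient_wq,
			   &mydrv_timeout_work, delay_jiffies);
}
```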

> I think that should be driver independent, at least for most drivers.
> If the system has to run in low-power mode, system_power_efficient_wq
> should be chosen automatically by schedule_work(); otherwise, when
> high performance is more important, schedule_work() should use
> system_wq.

The problem there is that system_wq has traditionally guaranteed
per-cpu execution.  It can't automatically be switched to unbound
behavior.  The best long-term solution would be to isolate the users
which depend on per-cpu behavior and mark those specially, rather
than the other way around as we're doing now; that is, make the
per-cpu guarantee the special case rather than the norm.  That's
gonna take a lot of auditing tho.
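Concretely, flipping the default would mean the per-cpu-dependent
users declare themselves at workqueue creation while everyone else
gets power-efficient placement.  A hedged sketch of what the two ends
look like with today's flags (driver names hypothetical):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *percpu_wq, *pe_wq;

static int __init mydrv_init(void)
{
	/* Today, per-cpu execution is the implicit default. */
	percpu_wq = alloc_workqueue("mydrv_percpu", 0, 0);
	if (!percpu_wq)
		return -ENOMEM;

	/*
	 * WQ_POWER_EFFICIENT makes the queue unbound when the
	 * workqueue.power_efficient boot parameter (or
	 * CONFIG_WQ_POWER_EFFICIENT_DEFAULT) is enabled, and leaves
	 * it per-cpu otherwise.
	 */
	pe_wq = alloc_workqueue("mydrv_pe", WQ_POWER_EFFICIENT, 0);
	if (!pe_wq) {
		destroy_workqueue(percpu_wq);
		return -ENOMEM;
	}
	return 0;
}
```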

Thanks.

-- 
tejun