Message-ID: <20130321001211.GC31256@htj.dyndns.org>
Date:	Wed, 20 Mar 2013 17:12:11 -0700
From:	Tejun Heo <tj@...nel.org>
To:	Viresh Kumar <viresh.kumar@...aro.org>
Cc:	pjt@...gle.com, paul.mckenney@...aro.org, tglx@...utronix.de,
	suresh.b.siddha@...el.com, venki@...gle.com, mingo@...hat.com,
	peterz@...radead.org, rostedt@...dmis.org,
	linaro-kernel@...ts.linaro.org, robin.randhawa@....com,
	Steve.Bannister@....com, Liviu.Dudau@....com,
	charles.garcia-tobin@....com, Arvind.Chauhan@....com,
	linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V3 3/7] workqueue: Add helpers to schedule work on any cpu

Hello, Viresh.

On Mon, Mar 18, 2013 at 08:53:25PM +0530, Viresh Kumar wrote:
> queue_work() queues work on the current cpu.  This may wake up an idle CPU,
> which is not actually required.
> 
> Some of these works can be processed by any CPU, so we should select a non-idle
> CPU here.  The initial idea was to modify the implementation of queue_work(), but
> that might end up breaking lots of kernel code and would be a nightmare to debug.
> 
> So, we settled on adding new workqueue interfaces for works that don't depend
> on a particular cpu to execute them.
> 
> This patch adds the following new routines:
> - queue_work_on_any_cpu
> - queue_delayed_work_on_any_cpu
> 
> These routines look for the closest (via scheduling domains) non-idle cpu
> (non-idle from the scheduler's perspective).  If the current cpu is not idle,
> or if all cpus are idle, the work is scheduled on the local cpu.
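
For reference, I'm reading the proposed interface as roughly the following;
the exact prototypes here are my assumption from the description above, not
copied from the patch:

/* prototypes assumed from the patch description, not copied verbatim */
bool queue_work_on_any_cpu(struct workqueue_struct *wq,
			   struct work_struct *work);
bool queue_delayed_work_on_any_cpu(struct workqueue_struct *wq,
				   struct delayed_work *dwork,
				   unsigned long delay);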

For some reason, I've always been unsure about this patchset.  I've
been thinking about it for the past few days, and I think I now know why.

I can see a strategy like this being useful for timers, and the
power-saving numbers are impressive, BTW - we definitely want that.
While workqueue shares a lot of attributes with timers, there's one
important difference - the scheduler is involved in work item execution.

Per-cpu work items artificially remove the choice of CPU from the
scheduler, on the assumption that such a restriction leads to lower
overall overhead.  There is another variant of workqueue - WQ_UNBOUND -
for cases where that assumption doesn't hold too well and it's best to
give the scheduler full latitude over work item execution, including
the choice of CPU.
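
To make the contrast concrete, here is a minimal sketch of what an
unbound workqueue looks like from a caller's side (names made up,
illustrative only, not from the patch):

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;
static struct work_struct example_work;

static void example_work_fn(struct work_struct *work)
{
	/* runs on whichever CPU the scheduler placed the worker on */
}

static int __init example_init(void)
{
	/* WQ_UNBOUND: work items aren't tied to the queueing CPU */
	example_wq = alloc_workqueue("example_unbound", WQ_UNBOUND, 0);
	if (!example_wq)
		return -ENOMEM;

	INIT_WORK(&example_work, example_work_fn);
	queue_work(example_wq, &example_work);
	return 0;
}

static void __exit example_exit(void)
{
	destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");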

Now, this patchset kinda sorta adds back CPU selection by the scheduler
in an ad-hoc way, right?  If we're gonna do that, why not just make
these workqueues unbound?  Then the scheduler would have full discretion
over them, and we wouldn't need to implement this separate ad-hoc thing
on the side.  It's the roundaboutness that bothers me.

I'm not sure about other workqueues, but the block one can be very
performance-sensitive on machines with high IOPS, and it would
definitely be advisable to keep it CPU-affine unless power consumption
is the overriding concern.  So how about introducing another workqueue
flag, say WQ_UNBOUND_FOR_POWER (it's terrible, please come up with
something better), which is WQ_UNBOUND on configurations where this
matters and becomes a noop if not?
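
Roughly along these lines, purely as a sketch of the idea - the flag
name and the config symbol below are placeholders I'm making up:

/* sketch only: flag and config names are placeholders */
#ifdef CONFIG_WQ_POWER_SAVING
#define WQ_UNBOUND_FOR_POWER	WQ_UNBOUND	/* scheduler picks the CPU */
#else
#define WQ_UNBOUND_FOR_POWER	0		/* noop: stays per-cpu */
#endif

/* a user like the block layer would then just do */
wq = alloc_workqueue("blk_example", WQ_UNBOUND_FOR_POWER | WQ_MEM_RECLAIM, 0);

That way the per-cpu behavior stays the default and power-conscious
configurations get the unbound behavior without adding a new queueing
API on the side.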

Thanks.

-- 
tejun
