Message-ID: <20120720165213.GD32763@google.com>
Date: Fri, 20 Jul 2012 09:52:13 -0700
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
tglx@...utronix.de, linux-pm@...r.kernel.org
Subject: Re: [PATCHSET] workqueue: reimplement CPU hotplug to keep idle workers
Hello, Peter.
On Fri, Jul 20, 2012 at 06:39:51PM +0200, Peter Zijlstra wrote:
> On Tue, 2012-07-17 at 10:12 -0700, Tejun Heo wrote:
> > Currently, workqueue destroys all workers for offline CPUs unless
> > there are lingering work items.
>
> _that_ is the root of all ugly in that thing. I still find it utterly
> insane you can create 'per-cpu' workqueues and then violate the per-cpu
> property with hotplug and get your work ran on a different CPU.
Let's talk about this part in the other reply you made.
> It should be a hard error to use queue_work_on() and then run the work
> on a different cpu. Yet somehow this isn't so.
Ooh, yeah, I agree. That's next on the wq to-do list. The problem is
that queue_work() is implemented in terms of queue_work_on(). In most
cases, the local binding serves as a locality optimization more than
anything else, but there are use cases where affinity is required for
correctness.
The assumption was that such users would flush their work items during
CPU_DOWN, but it will probably be much better to require users that
need CPU affinity to always use queue_work_on() - instead of relying
on the implicit local affinity from queue_work() - and flush their
work items automatically from the workqueue hotplug callback.
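To make the distinction concrete, here is a minimal sketch (the names,
the per-cpu work item, and the notifier-based flush are mine for
illustration, not from the patchset) of what an affinity-requiring user
looks like today: explicit queue_work_on() plus a manual flush before
the CPU goes down. The proposal above would move that flush out of each
user and into the workqueue hotplug callback itself.

/*
 * Hypothetical user whose work function must run on the CPU it was
 * queued on, e.g. because it touches per-cpu state.
 */
#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

static DEFINE_PER_CPU(struct work_struct, stats_work);

static void stats_fn(struct work_struct *work)
{
	/* touches this CPU's data; must not migrate */
}

static void kick_cpu(int cpu)
{
	/* explicit affinity: never plain queue_work() for correctness users */
	queue_work_on(cpu, system_wq, &per_cpu(stats_work, cpu));
}

static int stats_cpu_callback(struct notifier_block *nb,
			      unsigned long action, void *hcpu)
{
	int cpu = (unsigned long)hcpu;

	/* drain our per-cpu work before the CPU goes away */
	if (action == CPU_DOWN_PREPARE)
		flush_work(&per_cpu(stats_work, cpu));
	return NOTIFY_OK;
}

static struct notifier_block stats_cpu_nb = {
	.notifier_call = stats_cpu_callback,
};

static int __init stats_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_WORK(&per_cpu(stats_work, cpu), stats_fn);
	register_cpu_notifier(&stats_cpu_nb);
	return 0;
}

With the flushing done from the workqueue side, the notifier above goes
away and the user's only obligation is to use queue_work_on().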
Thanks.
--
tejun