Message-ID: <4C2DAEBB.7090607@kernel.org>
Date: Fri, 02 Jul 2010 11:17:47 +0200
From: Tejun Heo <tj@...nel.org>
To: David Howells <dhowells@...hat.com>,
Arjan van de Ven <arjan@...ux.intel.com>
CC: Frederic Weisbecker <fweisbec@...il.com>,
torvalds@...ux-foundation.org, mingo@...e.hu,
linux-kernel@...r.kernel.org, jeff@...zik.org,
akpm@...ux-foundation.org, rusty@...tcorp.com.au,
cl@...ux-foundation.org, oleg@...hat.com, axboe@...nel.dk,
dwalker@...eaurora.org, stefanr@...6.in-berlin.de,
florian@...kler.org, andi@...stfloor.org, mst@...hat.com,
randy.dunlap@...cle.com, Arjan van de Ven <arjan@...radead.org>
Subject: [PATCHSET] workqueue: implement and use WQ_UNBOUND

Hello, David, Arjan.

These four patches implement unbound workqueues, which can be used as a
simple execution context provider. I changed async to use it and will
also convert fscache. An unbound workqueue is requested by setting
WQ_UNBOUND at workqueue creation. Work items queued to an unbound
workqueue are implicitly HIGHPRI and are dispatched to unbound workers
as soon as resources are available; the only limit the workqueue code
applies is @max_active. IOW, for both async and fscache, things will
stay about the same.
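
For illustration, here's a minimal sketch of what a user looks like,
assuming the alloc_workqueue() interface from this series (name, flags,
max_active); the workqueue name, work function and init function below
are made up for the example:

  #include <linux/module.h>
  #include <linux/workqueue.h>

  /* hypothetical work function; runs on an unbound worker,
   * no CPU affinity implied */
  static void my_work_fn(struct work_struct *work)
  {
  }

  static DECLARE_WORK(my_work, my_work_fn);
  static struct workqueue_struct *my_wq;

  static int __init my_init(void)
  {
          /*
           * WQ_UNBOUND at creation time; @max_active (1 here) is
           * the only concurrency limit the workqueue code applies.
           */
          my_wq = alloc_workqueue("my_wq", WQ_UNBOUND, 1);
          if (!my_wq)
                  return -ENOMEM;

          queue_work(my_wq, &my_work);
          return 0;
  }
  module_init(my_init);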

WQ_UNBOUND can also serve the role of WQ_SINGLE_CPU, so WQ_SINGLE_CPU
is dropped and replaced by WQ_UNBOUND.
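
So, assuming a hypothetical WQ_SINGLE_CPU user that passed the flag to
alloc_workqueue(), the conversion is just a flag change at creation
time, with @max_active carried over unchanged, e.g.:

  -	wq = alloc_workqueue("foo", WQ_SINGLE_CPU, 1);
  +	wq = alloc_workqueue("foo", WQ_UNBOUND, 1);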

Arjan, I still think we'll be better off using bound workqueues for
async, but let's first convert without causing any behavior difference.
Either way isn't going to make a noticeable difference anyway. If
you're okay with the conversion, please ack it.

David, this should work for fscache/slow-work the same way too, which
should relieve your concern, right? Oh, and Frederic suggested that we
would be better off with something based on the tracing API; I agree,
so the debugfs interface is currently dropped from the tree. What do
you think?

Thanks.
--
tejun