Message-ID: <4C2A3CD0.70706@kernel.org>
Date: Tue, 29 Jun 2010 20:34:56 +0200
From: Tejun Heo <tj@...nel.org>
To: Arjan van de Ven <arjan@...ux.intel.com>
CC: Frederic Weisbecker <fweisbec@...il.com>,
torvalds@...ux-foundation.org, mingo@...e.hu,
linux-kernel@...r.kernel.org, jeff@...zik.org,
akpm@...ux-foundation.org, rusty@...tcorp.com.au,
cl@...ux-foundation.org, dhowells@...hat.com, oleg@...hat.com,
axboe@...nel.dk, dwalker@...eaurora.org, stefanr@...6.in-berlin.de,
florian@...kler.org, andi@...stfloor.org, mst@...hat.com,
randy.dunlap@...cle.com, Arjan van de Ven <arjan@...radead.org>
Subject: Re: [PATCH 34/35] async: use workqueue for worker pool

Hello,

On 06/29/2010 08:22 PM, Arjan van de Ven wrote:
> I'm not trying to suggest "unbound". I'm trying to suggest "don't
> start bounding until you hit # threads >= # cpus". You have some
> clever tricks to deal with bounding things; but let's make sure that
> the simple case of having less work to run in parallel than the
> number of cpus gets dealt with simply and unbound.

Well, the thing is, for most cases, binding to cpus is simply better.
That's the reason why our default workqueue was per-cpu to begin with.
There are just a lot more opportunities to optimize both memory access
and synchronization overheads.
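
As a rough illustration (the handler and function names below are
hypothetical, not anything from the patch series), queueing through the
per-cpu default keeps the issuer and the worker on the same CPU:

	#include <linux/workqueue.h>
	#include <linux/smp.h>
	#include <linux/printk.h>

	/* Hypothetical work handler; the names here are illustrative only. */
	static void my_work_fn(struct work_struct *work)
	{
		/*
		 * With a per-cpu workqueue the item normally runs on the CPU
		 * it was queued from (barring CPU hotplug), so caches touched
		 * by the issuer are likely still warm here.
		 */
		pr_info("work running on cpu %d\n", raw_smp_processor_id());
	}

	static DECLARE_WORK(my_work, my_work_fn);

	static void issue_work(void)
	{
		/* schedule_work() queues on the per-cpu system workqueue. */
		schedule_work(&my_work);
	}
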
> You also consolidate the thread pools so that you have one global
> pool, so unlike the current situation where you get O(Nr pools * Nr
> cpus), you only get O(Nr cpus) threads... that's not too
> burdensome imo. If you want to go below that then I think you're
> going too far in reducing the number of threads in your
> pool. Really.

I lost you in the above paragraph, but I think it would be better to
keep kthread pools separate. It behaves much better with respect to
memory access locality (work issuer and worker are on the same cpu,
and the stack and other memory used by the worker are likely to
already be hot). Also, we don't do it yet, but when creating kthreads
we could allocate the stack with NUMA in mind too.
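
Something along these lines, purely as a sketch: kthread_create_on_node()
is assumed here as the node-aware creation interface, and the worker
function and pool argument are hypothetical.

	#include <linux/kthread.h>
	#include <linux/err.h>
	#include <linux/topology.h>
	#include <linux/sched.h>

	/*
	 * Illustrative only: create a pool worker whose task (and stack) is
	 * allocated on the NUMA node of the CPU it will serve.
	 */
	static struct task_struct *create_worker_for_cpu(int cpu,
							 int (*worker_fn)(void *),
							 void *pool)
	{
		int node = cpu_to_node(cpu);
		struct task_struct *task;

		task = kthread_create_on_node(worker_fn, pool, node,
					      "kworker/%d", cpu);
		if (IS_ERR(task))
			return task;

		/* Bind the worker to its CPU so issuer and worker share caches. */
		kthread_bind(task, cpu);
		wake_up_process(task);
		return task;
	}
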

> so... back to my question; will those two tasks run in parallel or
> sequentially?

If they are scheduled on the same cpu, they won't. If that's
something actually necessary, let's implement it. I have no problem
with that. cmwq can already serve as a simple execution context
provider without concurrency control, and pumping contexts into async
isn't hard at all. I just want to know whether it's something that is
actually useful. So, where would that be useful?
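
To make that concrete, here is a minimal sketch of what "execution
context provider without concurrency control" could look like; the
queue name, the WQ_UNBOUND usage and the probe callbacks are assumed
for illustration, not taken from the series.

	#include <linux/workqueue.h>
	#include <linux/errno.h>
	#include <linux/init.h>

	/* Hypothetical async-style callbacks; bodies elided. */
	static void probe_disk_a(struct work_struct *work) { /* ... */ }
	static void probe_disk_b(struct work_struct *work) { /* ... */ }

	static DECLARE_WORK(work_a, probe_disk_a);
	static DECLARE_WORK(work_b, probe_disk_b);

	static int __init issue_parallel_probes(void)
	{
		/*
		 * WQ_UNBOUND is assumed here as the "plain execution context"
		 * mode: items are not bound to the issuing CPU and are not
		 * subject to per-cpu concurrency management, so the two work
		 * items are free to run concurrently rather than being
		 * serialized behind one another.
		 */
		struct workqueue_struct *wq = alloc_workqueue("async_probe",
							      WQ_UNBOUND, 0);
		if (!wq)
			return -ENOMEM;

		queue_work(wq, &work_a);
		queue_work(wq, &work_b);

		flush_workqueue(wq);	/* wait for both to finish */
		destroy_workqueue(wq);
		return 0;
	}
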
Thanks.
--
tejun