Message-ID: <4C299B35.6010506@kernel.org>
Date: Tue, 29 Jun 2010 09:05:25 +0200
From: Tejun Heo <tj@...nel.org>
To: Frederic Weisbecker <fweisbec@...il.com>
CC: torvalds@...ux-foundation.org, mingo@...e.hu,
linux-kernel@...r.kernel.org, jeff@...zik.org,
akpm@...ux-foundation.org, rusty@...tcorp.com.au,
cl@...ux-foundation.org, dhowells@...hat.com,
arjan@...ux.intel.com, oleg@...hat.com, axboe@...nel.dk,
dwalker@...eaurora.org, stefanr@...6.in-berlin.de,
florian@...kler.org, andi@...stfloor.org, mst@...hat.com,
randy.dunlap@...cle.com
Subject: Re: [PATCHSET] workqueue: concurrency managed workqueue, take#6
Hello,
On 06/29/2010 01:18 AM, Frederic Weisbecker wrote:
> On Mon, Jun 28, 2010 at 11:03:48PM +0200, Tejun Heo wrote:
>> B. General documentation of Concurrency Managed Workqueue (cmwq)
>> ================================================================
>
>
> It would be nice to get this in Documentation/workqueue-design.txt,
> as the design is complicated enough to deserve this file :)
Yeah, I'm thinking about putting more technical description as the
head comment in workqueue.c and putting overview and information for
workqueue users under Documentation.
>> As multiple execution contexts are available for each wq, deadlocks
>> around execution contexts are much harder to create.  The default wq,
>> system_wq, has a maximum concurrency level of 256 and, unless there is
>> a scenario which can result in a dependency loop involving more than
>> 254 workers, it won't deadlock.
>
> Why this arbitrary limitation?
It's basically a safety mechanism to prevent a runaway user from
saturating the system with workers.  256 seemed high enough for most
use cases yet low enough not to cause any major system failure. So,
yeah, I pulled that number out of my ass.
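For illustration only (this is not part of the patchset), here is a
minimal sketch of the kind of execution-context dependency cmwq is
meant to tolerate: a work item on system_wq that queues and then
flushes a second work item on the same wq.  All identifiers here
(parent_fn, child_fn, demo_init, etc.) are made up for the example;
the point is just that flush_work() from inside a running work needs
a second execution context, which cmwq can provide up to the
concurrency limit discussed above.

/*
 * Hedged sketch: a dependency between two works on the default wq.
 * Under cmwq the pool can run both concurrently, so the flush below
 * completes; with a single execution context it could deadlock.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

static void child_fn(struct work_struct *work)
{
	pr_info("child work ran\n");
}
static DECLARE_WORK(child_work, child_fn);

static void parent_fn(struct work_struct *work)
{
	/* Depends on another work on the same wq finishing first. */
	queue_work(system_wq, &child_work);
	flush_work(&child_work);	/* needs a second worker */
	pr_info("parent work done\n");
}
static DECLARE_WORK(parent_work, parent_fn);

static int __init demo_init(void)
{
	queue_work(system_wq, &parent_work);
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&parent_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");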
Thanks.
--
tejun