Message-ID: <4AC5F5FB.9050202@s5r6.in-berlin.de>
Date: Fri, 02 Oct 2009 14:45:47 +0200
From: Stefan Richter <stefanr@...6.in-berlin.de>
To: Tejun Heo <tj@...nel.org>
CC: David Howells <dhowells@...hat.com>, jeff@...zik.org,
mingo@...e.hu, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, jens.axboe@...cle.com,
rusty@...tcorp.com.au, cl@...ux-foundation.org,
arjan@...ux.intel.com
Subject: Re: [RFC PATCHSET] workqueue: implement concurrency managed workqueue

Tejun Heo wrote:
> David Howells wrote:
>> Sounds interesting as a replacement for slow-work. Some thoughts for you:
>>
>> The most important features of slow-work are:
>>
>> (1) Work items are not re-entered whilst they are executing.
>>
>> (2) The slow-work facility keeps references on its work items by asking the
>> client to get and put on the client's refcount.
>>
>> (3) The slow-work facility can create a lot more threads than the number of
>> CPUs on a system, and the system doesn't grind to a halt if they're all
>> taken up with long term I/O (performing several mkdirs for example).
>>
>> I think you have (1) and (3) covered, but I'm unsure about (2).
>
> Given that slow-work isn't being used too extensively yet, I was
> thinking whether that part could be pushed down to the caller. Or, we
> can also wrap work and export an interface which supports the get/put
> reference.
BTW, it's not only slow-work users that need reference counting; many
existing regular workqueue users could use it as well.  (Well, I guess I
stated the obvious.)  Currently we have local wrappers for that, like these:
http://lxr.linux.no/#linux+v2.6.31/drivers/firewire/core-card.c#L212
http://lxr.linux.no/#linux+v2.6.31/drivers/firewire/sbp2.c#L838
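
Roughly, the pattern behind such wrappers looks like the sketch below.  The
names are made up for illustration (this is not the actual firewire code),
and it assumes the work item is set up with INIT_WORK() when the object is
allocated; it just shows the usual kref-around-the-work-owner idiom:

#include <linux/kref.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

struct foo_device {
	struct kref kref;
	struct work_struct work;	/* INIT_WORK(&dev->work, foo_work) at alloc time */
};

static void foo_device_release(struct kref *kref)
{
	kfree(container_of(kref, struct foo_device, kref));
}

static void foo_work(struct work_struct *work)
{
	struct foo_device *dev = container_of(work, struct foo_device, work);

	/* ... the actual deferred processing ... */

	/* drop the reference taken when the work was queued */
	kref_put(&dev->kref, foo_device_release);
}

static void foo_queue_work(struct foo_device *dev)
{
	/* pin dev for as long as the work item is pending or running */
	kref_get(&dev->kref);
	if (!schedule_work(&dev->work))
		/* work was already queued; drop the extra reference again */
		kref_put(&dev->kref, foo_device_release);
}

I.e. everybody open-codes essentially the same get-before-queue,
put-at-the-end-of-the-handler dance.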
--
Stefan Richter
-=====-==--= =-=- ---=-
http://arcgraph.de/sr/