Message-ID: <4B2F7DD2.2080902@linux.intel.com>
Date: Mon, 21 Dec 2009 14:53:22 +0100
From: Arjan van de Ven <arjan@...ux.intel.com>
To: Tejun Heo <tj@...nel.org>
CC: Jens Axboe <jens.axboe@...cle.com>,
Andi Kleen <andi@...stfloor.org>,
Peter Zijlstra <peterz@...radead.org>,
torvalds@...ux-foundation.org, awalls@...ix.net,
linux-kernel@...r.kernel.org, jeff@...zik.org, mingo@...e.hu,
akpm@...ux-foundation.org, rusty@...tcorp.com.au,
cl@...ux-foundation.org, dhowells@...hat.com, avi@...hat.com,
johannes@...solutions.net
Subject: Re: workqueue thing
On 12/21/2009 14:22, Tejun Heo wrote:
> Hello,
>
> On 12/21/2009 08:11 PM, Arjan van de Ven wrote:
>> I don't mind a good and clean design; and for sure sharing thread
>> pools into one pool is really good. But if I have to choose between
>> a complex "how to deal with deadlocks" algorithm, versus just
> running some more threads in the pool, I'll pick the latter.
>
> The deadlock avoidance algorithm is pretty simple. It creates a new
> worker when everything is blocked. If the attempt to create a new
> worker blocks, it calls in dedicated workers to ensure the allocation
> path is not blocked. It's not that complex.
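
Just to make sure I'm reading the scheme right, here's roughly how I picture
it; this is only an illustrative user-space sketch, and the struct and helper
names (spawn_worker, wake_rescuer, etc.) are invented, not the real cmwq code:

/* sketch.c: illustrative only; these names are invented, not the cmwq API */
#include <stdio.h>

struct pool {
	int idle_workers;	/* workers ready to pick up work items    */
	int pending_work;	/* queued work items waiting for a thread */
};

/* hypothetical helpers, stubbed out for the sketch */
static int spawn_worker(struct pool *p)
{
	(void)p;
	return -1;		/* pretend the allocation blocked/failed */
}

static void wake_rescuer(struct pool *p)
{
	(void)p;
	printf("rescuer woken\n");
}

/*
 * Only when every worker is busy/blocked and work is still pending do we
 * try to add a worker; if even that attempt cannot make progress (it may
 * need memory itself), a rescuer thread that was created up front, with
 * its resources already allocated, is woken so the allocation path keeps
 * moving.
 */
static void maybe_add_worker(struct pool *p)
{
	if (p->idle_workers == 0 && p->pending_work > 0) {
		if (spawn_worker(p) < 0)
			wake_rescuer(p);
	}
}

int main(void)
{
	struct pool p = { .idle_workers = 0, .pending_work = 3 };

	maybe_add_worker(&p);
	return 0;
}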
I'm just wondering if even that is overkill; I suspect you can do this entirely without the scheduler intrusion:
just make a new thread for each work item, with some hysteresis:
* threads should stay around for a bit before dying (you do that)
* after some minimum number of threads (say 4 per CPU), you wait, say, 0.1 seconds before deciding it's
time to spawn more threads, to smooth out spikes of very short-lived stuff.
Wouldn't that be a lot simpler than "ask the scheduler to see if they are all blocked"? And if they are
all very busy churning CPU (say doing raid6 work, or btrfs checksumming), you would still want more
threads anyway, I suspect.
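
Something like the strawman below is all I have in mind; the numbers and the
struct are invented just to show the hysteresis, nothing more:

/* strawman.c: thread-spawn hysteresis sketch; all names/numbers invented */
#include <stdbool.h>
#include <stdio.h>

#define MIN_THREADS_PER_CPU	4
#define SPAWN_DELAY_MS		100	/* hesitate ~0.1s before growing */

struct cpu_pool {
	int nr_threads;			/* threads currently alive on this CPU */
	int nr_pending;			/* work items queued, not yet running  */
	unsigned long now_ms;		/* current time, in milliseconds       */
	unsigned long pending_since_ms;	/* when the oldest queued item arrived */
};

/*
 * Spawn freely up to the per-CPU minimum; above that, only spawn once work
 * has been sitting in the queue for longer than the delay, so a burst of
 * very short-lived items doesn't fork a pile of threads for nothing.
 */
static bool should_spawn_thread(const struct cpu_pool *p)
{
	if (p->nr_pending == 0)
		return false;
	if (p->nr_threads < MIN_THREADS_PER_CPU)
		return true;
	return (p->now_ms - p->pending_since_ms) >= SPAWN_DELAY_MS;
}

int main(void)
{
	struct cpu_pool p = {
		.nr_threads = 4, .nr_pending = 2,
		.now_ms = 1150, .pending_since_ms = 1000,
	};

	printf("spawn another thread? %s\n",
	       should_spawn_thread(&p) ? "yes" : "no");
	return 0;
}

Note there is no scheduler hook anywhere in that: the only input is how long
work has been sitting in the queue, and it grows the pool whether the workers
are blocked on IO or just churning CPU.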