Message-ID: <1261391728.4314.49.camel@laptop>
Date: Mon, 21 Dec 2009 11:35:28 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Arjan van de Ven <arjan@...ux.intel.com>,
Andi Kleen <andi@...stfloor.org>, Tejun Heo <tj@...nel.org>,
torvalds@...ux-foundation.org, awalls@...ix.net,
linux-kernel@...r.kernel.org, jeff@...zik.org, mingo@...e.hu,
akpm@...ux-foundation.org, rusty@...tcorp.com.au,
cl@...ux-foundation.org, dhowells@...hat.com, avi@...hat.com,
johannes@...solutions.net
Subject: Re: workqueue thing
On Mon, 2009-12-21 at 10:17 +0100, Jens Axboe wrote:
> On Fri, Dec 18 2009, Arjan van de Ven wrote:
> > In addition, threads are cheap. Linux has no technical problem with
> > running hundreds (if not thousands) of kernel threads; they cost
> > basically a task struct and a stack (2 pages) each, and that's about
> > it. Making an elaborate-and-thus-fragile design to save a few kernel
> > threads is likely a bad design direction...
>
> One would hope not, since that is by no means outside of what you see
> on boxes today... Thousands. The fact that they are cheap is not an
> argument against doing it right. Conceptually, I think the concurrency
> managed workqueue pool is a much cleaner (and more efficient) design.
If your only concern is the number of idle threads, and it reads like
it is, then there is a much easier solution for that (sketched below).
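Something like the following, say: a pool worker that simply exits after
sitting idle for a while, so idle threads never pile up. This is only a
minimal sketch of one such approach; struct my_pool, the my_*() helpers
and MY_IDLE_TIMEOUT are made-up names for illustration, not a real
kernel API.

#include <linux/kthread.h>
#include <linux/sched.h>

#define MY_IDLE_TIMEOUT		(5 * HZ)	/* arbitrary */

struct my_pool;					/* hypothetical pool type */

static int my_worker_fn(void *data)
{
	struct my_pool *pool = data;

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);

		if (my_work_pending(pool)) {	/* hypothetical helper */
			__set_current_state(TASK_RUNNING);
			my_run_one_work(pool);	/* hypothetical helper */
			continue;
		}

		/*
		 * The enqueue path wakes us with wake_up_process();
		 * schedule_timeout() returns 0 when the full timeout
		 * elapsed, i.e. nobody had any work for us.
		 */
		if (!schedule_timeout(MY_IDLE_TIMEOUT))
			break;			/* idle too long, die */
	}

	my_pool_worker_gone(pool);		/* hypothetical helper */
	return 0;	/* task struct and stack get freed with us */
}

A dead worker is just a task struct and two pages of stack; re-spawning
one with kthread_run() when work shows up again is hardly more expensive
than waking an idle one.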
But I tend to agree with Arjan: who cares if there are thousands of
idle threads sitting around?
The fact is that this concurrency managed workqueue stuff only really
works for work items that don't consume CPU, and that's simply not the
case today; there are a number of workqueue users which really do burn
CPU time.
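For example (a made-up work item, but representative of those users):
the function below never sleeps, so a scheme that detects the need for
more concurrency by watching the running worker block will never hand
the CPU's worker to the items queued up behind it. my_crunch_work() and
my_expensive_transform() are hypothetical names.

#include <linux/workqueue.h>
#include <linux/sched.h>

static void my_crunch_work(struct work_struct *work)
{
	int i;

	for (i = 0; i < (1 << 20); i++) {
		my_expensive_transform(i);	/* pure CPU, no blocking */

		/*
		 * cond_resched() lets other runnable tasks preempt us,
		 * but the worker never actually sleeps, so as far as
		 * the pool can tell it is "running" the whole time.
		 */
		cond_resched();
	}
}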
But even then, the corner cases introduced by memory pressure and
reclaim just make the whole thing an utterly fragile mess: spawning a
new worker takes a memory allocation, so a pool that has to make
progress on behalf of reclaim can end up waiting on the very memory it
is supposed to help free.