Message-ID: <20070222143145.GA3246@elte.hu>
Date: Thu, 22 Feb 2007 15:31:45 +0100
From: Ingo Molnar <mingo@...e.hu>
To: David Miller <davem@...emloft.net>
Cc: johnpol@....mipt.ru, arjan@...radead.org, drepper@...hat.com,
linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
hch@...radead.org, akpm@....com.au, alan@...rguk.ukuu.org.uk,
zach.brown@...cle.com, suparna@...ibm.com, davidel@...ilserver.org,
jens.axboe@...cle.com, tglx@...utronix.de
Subject: Re: [patch 00/13] Syslets, "Threadlets", generic AIO support, v3
* David Miller <davem@...emloft.net> wrote:
> If used for networking one could easily make this new interface create
> an arbitrary number of threads by just opening up that many
> connections to such a server and just sitting there not reading
> anything from any of the client sockets. And this happens
> non-maliciously for slow clients, whether that is due to application
> blockage or the characteristics of the network path.
there are two issues on which i'd like to disagree.
Firstly, i don't think you are fully applying the syslet/threadlet model.
There is no reason why an 'idle' client would have to use up a full
thread! It all depends on how you use syslets/threadlets, and on how
(frequently) you queue requests from cachemiss threads back to the
primary thread. Only in the simplest queueing model is there one thread
per currently blocked request. Syslets/threadlets do
/not/ force request processing to be performed in the async context
forever - the async thread could very much queue it back to the primary
context. (That's in essence what Tux did.) So the same state-machine
techniques can be applied on both the syslet and the threadlet model,
but at much more natural (and thus lower-overhead) points: /between/
system calls, not in the middle of them. There are a number of
measures that can be used to keep the number of parallel threads down.
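
To make that queueing model concrete, here is a rough userspace sketch
(plain pthreads, with made-up names - struct request, push()/pop(),
cachemiss_worker() - purely for illustration, this is not the actual
syslet/threadlet API): the primary thread runs each request's state
machine, and only the blocking step is handed to a worker, which then
queues the request straight back:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* hypothetical request and queue types - nothing to do with the real API: */
struct request {
        int id;
        int state;              /* where this request's state machine stands */
        struct request *next;
};

struct queue {
        struct request *head;
        pthread_mutex_t lock;
        pthread_cond_t cond;
};

#define QUEUE_INIT { NULL, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }

static struct queue ready_q = QUEUE_INIT;   /* processed by the primary thread */
static struct queue blocked_q = QUEUE_INIT; /* work for cachemiss workers */

static void push(struct queue *q, struct request *req)
{
        pthread_mutex_lock(&q->lock);
        req->next = q->head;
        q->head = req;
        pthread_cond_signal(&q->cond);
        pthread_mutex_unlock(&q->lock);
}

static struct request *pop(struct queue *q)
{
        struct request *req;

        pthread_mutex_lock(&q->lock);
        while (!q->head)
                pthread_cond_wait(&q->cond, &q->lock);
        req = q->head;
        q->head = req->next;
        pthread_mutex_unlock(&q->lock);
        return req;
}

/* worker: perform only the blocking step, then hand the request back: */
static void *cachemiss_worker(void *arg)
{
        (void)arg;
        for (;;) {
                struct request *req = pop(&blocked_q);

                usleep(1000);           /* stands in for the blocking syscall */
                req->state++;
                push(&ready_q, req);    /* queue it back to the primary context */
        }
        return NULL;
}

int main(void)
{
        pthread_t worker;
        int i, done = 0;

        pthread_create(&worker, NULL, cachemiss_worker, NULL);

        for (i = 0; i < 3; i++) {
                struct request *req = calloc(1, sizeof(*req));

                req->id = i;
                push(&ready_q, req);
        }
        /*
         * primary thread: advance each request's state machine between
         * blocking points. Only the blocking step itself is delegated.
         */
        while (done < 3) {
                struct request *req = pop(&ready_q);

                if (req->state == 0) {
                        push(&blocked_q, req);  /* would block: delegate it */
                } else {
                        printf("request %d done\n", req->id);
                        free(req);
                        done++;
                }
        }
        return 0;
}

the worker is occupied only for the duration of a single blocking step,
not for the lifetime of the request - which is exactly the distinction
the paragraph above is making.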
Secondly, even assuming lots of pending requests/async-threads and a
naive queueing model, an open request will eat up resources on the
server no matter what. So if your point is that "+4K of kernel stack
pinned down per open, blocked request makes syslets and threadlets not a
very good idea", then i'd like to disagree with that: while it wont be
zero-cost (4K does cost you 400MB of RAM per 100,000 outstanding
threads), it's often comparable to the other RAM costs that are already
attached to an open connection.
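
(back-of-envelope check of that figure - the 4K-per-blocked-request
constant is an assumption used for illustration, not something the
patchset mandates:)

#include <stdio.h>

int main(void)
{
        long stack_per_request = 4 * 1024;  /* bytes of kernel stack pinned */
        long outstanding = 100000;          /* blocked requests */
        long total = stack_per_request * outstanding;

        /* prints "409600000 bytes, ~409 MB" - i.e. roughly the 400MB above */
        printf("%ld bytes, ~%ld MB\n", total, total / 1000000);
        return 0;
}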
(let me know if i misunderstood your point.)
Ingo