Message-ID: <Pine.LNX.4.64.0702091458180.2786@alien.or.mcafeemobile.com>
Date: Fri, 9 Feb 2007 15:11:53 -0800 (PST)
From: Davide Libenzi <davidel@...ilserver.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
cc: Zach Brown <zach.brown@...cle.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-aio@...ck.org, Suparna Bhattacharya <suparna@...ibm.com>,
Benjamin LaHaise <bcrl@...ck.org>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 0 of 4] Generic AIO by scheduling stacks
On Fri, 9 Feb 2007, Linus Torvalds wrote:
>
> Ok, here's another entry in this discussion.
That's another way to do it. But you end up creating/destroying a new
thread for every request. It may still perform just fine.
Another, even simpler way IMO, is to just have a plain per-task kthread
pool, and a queue. An async_submit() drops a request in the queue, and
wakes the request queue-head where the kthreads are sleeping. One kthread
picks up the request, services it, drops a result in the result queue, and
wakes the result queue-head (where async_fetch() callers are sleeping).
Cancellation is not a problem here (by means of sending a signal to the
service kthread). Also, no problem with arch-dependent code. This is a 1:1
match of what my userspace implementation does.
Of course, no hot-path optimizations are performed here, and you need a few
more context switches than necessary.
Let's have Zach (Ingo's support to Zach would be great) play with the
optimized version, and then we can maybe bench the three to see if the
more complex code that the optimized version requires gets a pay-back on
the performance side.
/me thinks it likely will
- Davide