Message-ID: <20081125165613.GI22504@elte.hu>
Date: Tue, 25 Nov 2008 17:56:13 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Avi Kivity <avi@...hat.com>
Cc: suparna@...ibm.com, Zach Brown <zach.brown@...cle.com>,
linux-aio@...ck.org, Jeff Moyer <jmoyer@...hat.com>,
Anthony Liguori <aliguori@...ibm.com>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: kvm aio wishlist

* Avi Kivity <avi@...hat.com> wrote:
> Ingo Molnar wrote:
>>
>>> Perhaps a variant of syslet, that is kernel-only, and does:
>>>
>>> - always allocate a new kernel stack at io_submit() time, but not a
>>> new thread
>>>
>>
>> such an N:M threading design is a loss - sooner or later we arrive at a
>> point where people actually start using it and then we want to
>> load-balance and schedule these entities.
>>
>
> It's only N:M as long as it's nonblocking. If it blocks, it becomes 1:1
> again. If it doesn't, it's probably faster to do things on the same
> cache as the caller.
>
>> So i'd suggest the kthread based async engine i wrote for syslets. It
>> worked well and for kernel-only entities it schedules super-fast - it
>> can do up to 20 million events per second on a 16-way box i'm testing
>> on. The objections about syslets were not related to the scheduling of
>> it but were mostly about the userspace API/ABI: you don't have to use
>> that.
>
> I'd love to have something :)
>
> I guess any cache and latency considerations could be fixed if
> - we schedule a syslet for the first time when the thread that launched
> it exits to userspace
> - we queue it on the current cpu's runqueue
>
> In that case, for the nonblocking case syslets and fibrils would
> have very similar performance.

yes. Hence, given that fibrils have various tradeoffs, we should do
the syslet thread pool. The code is there and it works :)
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/