Message-ID: <20070203100351.GA30262@elte.hu>
Date: Sat, 3 Feb 2007 11:03:51 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Matt Mackall <mpm@...enic.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Zach Brown <zach.brown@...cle.com>,
linux-kernel@...r.kernel.org, linux-aio@...ck.org,
Suparna Bhattacharya <suparna@...ibm.com>,
Benjamin LaHaise <bcrl@...ck.org>
Subject: Re: [PATCH 2 of 4] Introduce i386 fibril scheduling

* Matt Mackall <mpm@...enic.com> wrote:
> On Sat, Feb 03, 2007 at 09:23:08AM +0100, Ingo Molnar wrote:
> > The normal and most optimal workflow should be a user-space ring-buffer
> > of these constant-size struct async_syscall entries:
> >
> > struct async_syscall ringbuffer[1024];
> >
> > LIST_HEAD(submitted);
> > LIST_HEAD(pending);
> > LIST_HEAD(completed);
>
> It's wrong to call this a ring buffer as things won't be completed in
> any particular order. [...]

yeah, i realized this when i sent the mail. What i meant was 'array of
elements' - and it's clear from those list heads that completion is
fully out of order. (it should be an array so that the pages of those
entries can be pinned and completions can be manipulated from any
context, anytime.)
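
(a rough user-space sketch of what one constant-size entry and the
compact array could look like - the field names below are purely
illustrative, not the ABI from the actual patches:)

struct async_syscall {
	unsigned long	nr;		/* syscall number */
	unsigned long	args[6];	/* syscall arguments */
	long		result;		/* filled in on completion */
	unsigned long	status;		/* free/submitted/pending/completed */
	void		*cookie;	/* opaque per-request user data */
};

/*
 * compact, constant-size array: once registered with the kernel its
 * pages can be pinned, so entries can be read and completed from any
 * context, anytime.
 */
static struct async_syscall ringbuffer[1024];
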
(the queueing i described closely resembles Tux's "Tux syscall request"
handling scheme.)

> [...] So you'll need a fourth list head for which buffer elements are
> free. At which point, you might as well leave it entirely up to the
> application to manage the allocation of async_syscall structs. It may
> know it only needs two, or ten thousand, or five per client...

sure - the size should be variable, but the array itself should still be
compact and registered with the kernel up front. That way the security
checks can be done once, the pages can be pinned and accessed anytime,
etc.
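
(and a quick sketch of how the registration plus application-managed
allocation could look from user-space. sys_async_register() and the
helper names are hypothetical, only meant to illustrate the flow - they
are not the API of the actual patches. struct async_syscall is the
illustrative one from the sketch above:)

#include <stddef.h>

#define NR_ENTRIES 1024

/* hypothetical registration syscall: does the security checks once and
   pins the pages, so the kernel can touch the entries from any context */
extern long sys_async_register(struct async_syscall *table, size_t size);

static struct async_syscall ringbuffer[NR_ENTRIES];

/* the application tracks free entries itself, as suggested above */
static struct async_syscall *freelist[NR_ENTRIES];
static int nr_free;

static long async_init(void)
{
	int i;

	for (i = 0; i < NR_ENTRIES; i++)
		freelist[nr_free++] = &ringbuffer[i];

	return sys_async_register(ringbuffer, sizeof(ringbuffer));
}

static struct async_syscall *async_alloc(void)
{
	return nr_free ? freelist[--nr_free] : NULL;
}

static void async_free(struct async_syscall *entry)
{
	freelist[nr_free++] = entry;
}
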
Ingo