Message-ID: <Pine.LNX.4.64.0702061658370.19136@alien.or.mcafeemobile.com>
Date: Tue, 6 Feb 2007 17:15:02 -0800 (PST)
From: Davide Libenzi <davidel@...ilserver.org>
To: Joel Becker <Joel.Becker@...cle.com>
cc: Kent Overstreet <kent.overstreet@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Zach Brown <zach.brown@...cle.com>,
Ingo Molnar <mingo@...e.hu>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-aio@...ck.org, Suparna Bhattacharya <suparna@...ibm.com>,
Benjamin LaHaise <bcrl@...ck.org>
Subject: Re: [PATCH 2 of 4] Introduce i386 fibril scheduling
On Tue, 6 Feb 2007, Joel Becker wrote:
> > - Is it more expensive to forcibly have to wait and fetch a result even
> > for in-cache syscalls, or it's faster to walk the submission array?
>
> Not everything is in-cache. Databases will be doing O_DIRECT
> and will expect that 90% of their I/O calls will block. Why should they
> have to iterate this list every time? If this is the API, they *have*
> to. If there's an efficient way to get "just the ones that didn't
> block", then it's not a problem.
If that's what is wanted, then the async_submit() API can detect
synchronous completion early, and drop a result into the result-queue
immediately. That means an immediately following async_wait() will find
those completions right away. Or:
struct async_submit {
	void *cookie;
	int sysc_nbr;
	int nargs;
	long args[ASYNC_MAX_ARGS];
};

struct async_result {
	void *cookie;
	long result;
};

int async_submit(struct async_submit *a, struct async_result *r, int n);
Where "r" will store the ones that completed synchronously. I mean, there
are really many ways to do this.
I think ATM the core kernel implementation should be the focus, because
IMO we have only scratched the surface of the potential problems that
something like this can raise (scheduling, signaling, cleanup, cancel -
just to name a few).
- Davide
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/