Message-Id: <FF12832A-4BA3-45EF-AC5A-0828A79CE973@oracle.com>
Date: Thu, 4 Jan 2007 16:36:21 -0800
From: Zach Brown <zach.brown@...cle.com>
To: suparna@...ibm.com
Cc: linux-aio@...ck.org, akpm@...l.org, drepper@...hat.com,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
jakub@...hat.com, mingo@...e.hu
Subject: Re: [RFC] Heads up on a series of AIO patchsets
> generic_write_checks() are done in the submission path, not repeated
> during retries, so such types of checks are not intended to run in
> the aio thread.
Ah, I see, I was missing the short-cut which avoids re-running parts
of the write path if we're in a retry.
        if (!is_sync_kiocb(iocb) && kiocbIsRestarted(iocb)) {
                /* nothing to transfer, may just need to sync data */
                return ocount;
        }
It's pretty subtle that this has to be placed before the first
significant current reference and that nothing in the path can return
-EIOCBRETRY until after all of the significant current references.
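Roughly, I read the constraint as looking something like this (just a
hand-waved sketch; the helpers below are made-up names, not the actual
fs code):

        ssize_t some_aio_write_path(struct kiocb *iocb)
        {
                ssize_t err;

                /*
                 * The restart short-cut has to come before the first
                 * significant use of current, since a retry runs from
                 * the aio workqueue where current is no longer the
                 * submitting task.
                 */
                if (!is_sync_kiocb(iocb) && kiocbIsRestarted(iocb))
                        return continue_from_retry(iocb);   /* made-up helper */

                /* submission-only work that relies on the submitter's current */
                err = checks_that_use_current(iocb);         /* made-up helper */
                if (err)
                        return err;

                /*
                 * Only past this point may anything return -EIOCBRETRY;
                 * all the significant current references are behind us.
                 */
                return do_the_transfer(iocb);                /* made-up helper */
        }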
In totally unrelated news, I noticed that current->io_wait is set to
NULL instead of &current->__wait after having run the iocb. I wonder
if it shouldn't be saved and restored instead. And maybe update the
comment over is_sync_wait()? Just an observation.
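Something like this is what I had in mind (again only a sketch, I
haven't checked whether anything actually relies on io_wait ending up
NULL afterwards):

        ssize_t ret;
        wait_queue_t *saved_wait = current->io_wait;  /* save, don't assume NULL */

        current->io_wait = &iocb->ki_wait;
        ret = retry(iocb);
        current->io_wait = saved_wait;                /* rather than = NULL */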
> That is great and I look forward to it :) I am, however, assuming
> that whatever implementation you come up with will have a different
> interface from current linux aio -- i.e. a next generation aio model
> that will be easily integratable with kevents etc.
Yeah, that's the hope.
> Which takes me back to Ingo's point - let's have the new evolve in
> parallel with the old, if we can, and not hold up the patches for
> POSIX AIO to start using kernel AIO, or for epoll to integrate with
> AIO.
Sure, though there are only so many hours in a day :).
> OK, I just took a quick look at your blog and I see that you
> are basically implementing Linus' microthreads scheduling approach -
> one year since we had that discussion.
Yeah. I wanted to see what it would look like.
> Glad to see that you found a way to make it workable ...
Wellll, that remains to be seen. If nothing else we'll at least have
code to point at when discussing it. If we all agree it's not the
right way and dismiss the notion, fine, that's progress :).
> (I'm guessing that you are copying over the part
> of the stack in use at the time of every switch, is that correct?)
That was my first pass, yeah. It turned the knob a little too far
towards the "invasive but efficient" direction for my taste. I'm now
trying it with full stacks for each blocked op; we'll see how that
goes.
> At what
> point do you do the allocation of the saved stacks ?
I was allocating at block-time to keep memory consumption down, but I
think my fiddling around with it convinced me that isn't workable.
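In case it helps make the discussion concrete, the shape I'm playing
with looks vaguely like this (names entirely made up, not the real
patches), with the whole stack allocated up front at submission rather
than at block time:

        struct blocked_op {
                void *stack;          /* private stack for this op */
                /* saved register state, list linkage, etc. */
        };

        static struct blocked_op *blocked_op_alloc(void)
        {
                struct blocked_op *op = kzalloc(sizeof(*op), GFP_KERNEL);

                if (!op)
                        return NULL;

                /* grab the full stack at submission time, not when we block */
                op->stack = (void *)__get_free_pages(GFP_KERNEL,
                                                     get_order(THREAD_SIZE));
                if (!op->stack) {
                        kfree(op);
                        return NULL;
                }
                return op;
        }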
- z