Message-ID: <20110120202111.GB19797@quack.suse.cz>
Date: Thu, 20 Jan 2011 21:21:12 +0100
From: Jan Kara <jack@...e.cz>
To: Nick Piggin <npiggin@...il.com>
Cc: Jan Kara <jack@...e.cz>, Jeff Moyer <jmoyer@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [patch] fs: aio fix rcu lookup
On Thu 20-01-11 04:37:55, Nick Piggin wrote:
> On Thu, Jan 20, 2011 at 3:50 AM, Jan Kara <jack@...e.cz> wrote:
> > On Thu 20-01-11 03:03:23, Nick Piggin wrote:
> >> On Thu, Jan 20, 2011 at 12:21 AM, Jan Kara <jack@...e.cz> wrote:
> >> > Well, we are not required to cancel all the outstanding AIO because of the
> >> > API requirement, that's granted. But we must do it because of the way
> >> > the code is written. Outstanding IO requests reference the ioctx, but they
> >> > are not counted in ctx->users but in ctx->reqs_active. So the code relies
> >> > on the fact that the reference held by the hash table protects the ctx from
> >> > being freed, and io_destroy() waits for requests before dropping the last
> >> > reference to the ctx. But there's the second race I described, which makes
> >> > it possible for new IO to be created after io_destroy() has already waited
> >> > for all IO to finish...
> >>
> >> Yes there is that race too I agree. I just didn't follow through the code far
> >> enough to see it was a problem -- I thought it was by design.
> >>
> >> I'd like to solve it without synchronize_rcu() though.
> > Ah, OK. I don't find io_destroy() performance critical but I can
>
> Probably not performance critical, but it could be a very
> large slowdown so somebody might complain.
>
> > understand that you don't like synchronize_rcu() there. ;) Then it
> > should be possible to make IO requests count in ctx->users, which would
> > solve the race as well. We'd just have to be prepared for request
> > completion possibly dropping the last reference to the ioctx and freeing
> > it, but that shouldn't be an issue. Do you like that solution better?
>
> I think so, if it can be done without slowing things down
> and, if possible, without adding locks or atomics.
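  FWIW, the window we are talking about looks roughly like this - a much
simplified sketch of the current fs/aio.c structures and submission path,
not the actual code, with details trimmed:

	struct kioctx {
		atomic_t	users;		/* pins the kioctx itself */
		int		reqs_active;	/* in-flight iocbs */
		struct hlist_node list;		/* mm->ioctx_list, RCU */
		/* ... */
	};

	/* io_submit() side, simplified */
	ctx = lookup_ioctx(ctx_id);	/* RCU lookup, takes a ctx->users ref */
	/*
	 * io_destroy() can run right here: it marks the ctx dead, cancels
	 * outstanding requests, waits until ctx->reqs_active drains, and
	 * drops the hash table's reference...
	 */
	req = aio_get_req(ctx);		/*
					 * ...and only now does the submitter
					 * bump ctx->reqs_active and queue new
					 * IO against a ctx that io_destroy()
					 * already believes is drained.
					 */
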
Actually, I found that freeing the ioctx upon IO completion isn't
straightforward, because freeing the ioctx may need to sleep (it destroys a
workqueue) while aio_complete() can be called from interrupt context.
We could offload the sleeping work to an RCU callback (basically we'd have
to offload the whole of __put_ioctx() to the RCU callback), but I'm not
convinced it's worth it, so I instead chose a slightly more subtle approach
to fixing the race (see my patch).
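  Just to sketch what that offloading would look like (hypothetical only,
with made-up helper and field names, and assuming we add an rcu_head and a
work_struct to struct kioctx): since RCU callbacks themselves run in softirq
context and cannot sleep either, the callback would still have to punt the
sleeping part to a workqueue, roughly:

	/* hypothetical sketch, not the code in my patch */
	static void ioctx_free_work(struct work_struct *work)
	{
		struct kioctx *ctx = container_of(work, struct kioctx, free_work);

		/*
		 * The parts of __put_ioctx() that may sleep go here, e.g.
		 * tearing down the ctx's workqueue and freeing the ring.
		 */
	}

	static void ioctx_free_rcu(struct rcu_head *head)
	{
		struct kioctx *ctx = container_of(head, struct kioctx, rcu_head);

		/* still atomic context here, so punt to process context */
		INIT_WORK(&ctx->free_work, ioctx_free_work);
		schedule_work(&ctx->free_work);
	}

	/* last put, possibly from aio_complete() in interrupt context */
	if (atomic_dec_and_test(&ctx->users))
		call_rcu(&ctx->rcu_head, ioctx_free_rcu);

That's more churn than the race deserves IMHO, hence the simpler fix.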
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR