Message-ID: <x4939ongtsh.fsf@segfault.boston.devel.redhat.com>
Date: Thu, 20 Jan 2011 16:16:30 -0500
From: Jeff Moyer <jmoyer@...hat.com>
To: Jan Kara <jack@...e.cz>
Cc: Nick Piggin <npiggin@...il.com>, paulmck@...ux.vnet.ibm.com,
Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [patch] fs: aio fix rcu lookup
Jan Kara <jack@...e.cz> writes:
> So in the end, I've realized I don't need synchronize_rcu() at all and
> in fact everything is OK even without call_rcu() if I base my fix on top
> of your patch.
>
> Attached is your patch with added comment I proposed and also a patch
> fixing the second race. Better?
[snip]
> From 6d5375d55b5d88e8ceda739052566e033be620c2 Mon Sep 17 00:00:00 2001
> From: Jan Kara <jack@...e.cz>
> Date: Wed, 19 Jan 2011 00:37:48 +0100
> Subject: [PATCH 2/2] fs: Fix race between io_destroy() and io_submit() in AIO
>
> io_submit() can race with io_destroy():
>
> CPU1                                            CPU2
> io_submit()
>   do_io_submit()
>     ...
>     ctx = lookup_ioctx(ctx_id);
>                                                 io_destroy()
>     Now do_io_submit() holds the last reference to ctx.
>     ...
>     queue new AIO
>     put_ioctx(ctx) - frees ctx with active AIOs
[snip]
> We solve this issue by checking whether ctx is being destroyed in the
> AIO submission path after adding a new AIO to ctx. Then we are
> guaranteed that either io_destroy() waits for the new AIO, or we see
> that ctx is being destroyed and bail out.
>
> Signed-off-by: Jan Kara <jack@...e.cz>
> ---
> fs/aio.c | 15 +++++++++++++++
> 1 files changed, 15 insertions(+), 0 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index b4dd668..0244c04 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1642,6 +1642,21 @@ static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb,
> goto out_put_req;
>
> spin_lock_irq(&ctx->ctx_lock);
> + /*
> + * We could have raced with io_destroy() and are currently holding a
> + * reference to ctx which should be destroyed. We cannot submit IO
> + * since ctx gets freed as soon as io_submit() puts its reference.
> + * The check here is reliable since io_destroy() sets ctx->dead before
> + * waiting for outstanding IO. Thus if we don't see ctx->dead set here,
> + * io_destroy() waits for our IO to finish.
> + * The check is inside ctx->ctx_lock to avoid extra memory barrier
> + * in this fast path...
> + */
> + if (ctx->dead) {
> + spin_unlock_irq(&ctx->ctx_lock);
> + ret = -EINVAL;
> + goto out_put_req;
> + }
> aio_run_iocb(req);
> if (!list_empty(&ctx->run_list)) {
> /* drain the run list */
OK, that's clever. Thanks for looking into this, Jan!
You can put my:
Reviewed-by: Jeff Moyer <jmoyer@...hat.com>
on both patches.
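P.S. For anyone following along who wants to see the handshake in isolation: here is a minimal userspace sketch of the same pattern, with a pthread mutex standing in for ctx->ctx_lock and plain fields standing in for the kioctx state. The struct and function names are made up for illustration; the point is just that because "dead" is set and checked under the same lock, a submitter either gets counted before destroy decides what to wait for, or observes dead and bails out.

```c
#include <pthread.h>
#include <stdbool.h>

/* Userspace analogue of the kioctx handshake.  pthread_mutex_t stands
 * in for ctx->ctx_lock; "active" stands in for the outstanding AIOs
 * that io_destroy() waits on. */
struct ctx {
	pthread_mutex_t lock;
	bool dead;		/* set by destroy() before it waits */
	int active;		/* outstanding "AIOs" */
};

/* Returns 0 on success, -1 (the -EINVAL analogue) if ctx is dying. */
static int submit(struct ctx *c)
{
	pthread_mutex_lock(&c->lock);
	if (c->dead) {		/* the check the patch adds */
		pthread_mutex_unlock(&c->lock);
		return -1;
	}
	c->active++;		/* destroy() must now wait for us */
	pthread_mutex_unlock(&c->lock);
	return 0;
}

static void complete(struct ctx *c)
{
	pthread_mutex_lock(&c->lock);
	c->active--;
	pthread_mutex_unlock(&c->lock);
}

static void destroy(struct ctx *c)
{
	pthread_mutex_lock(&c->lock);
	c->dead = true;		/* ordered before the wait by the lock */
	pthread_mutex_unlock(&c->lock);
	/* real code would now wait for c->active to drop to 0 */
}
```

Because both sides take the same lock, no extra memory barrier is needed on the submit fast path, which is exactly the property the comment in the patch calls out.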
Cheers,
Jeff
--