Message-ID: <20180520073332.GA30522@ZenIV.linux.org.uk>
Date: Sun, 20 May 2018 08:33:39 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Christoph Hellwig <hch@....de>
Cc: Avi Kivity <avi@...lladb.com>, linux-aio@...ck.org,
linux-fsdevel@...r.kernel.org, netdev@...r.kernel.org,
linux-api@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/32] aio: implement IOCB_CMD_POLL
On Sun, May 20, 2018 at 06:32:25AM +0100, Al Viro wrote:
> > + spin_lock_irqsave(&ctx->ctx_lock, flags);
> > + list_add_tail(&aiocb->ki_list, &ctx->delayed_cancel_reqs);
> > + spin_unlock(&ctx->ctx_lock);
>
> ... and io_cancel(2) comes, finds it and inhume^Wcompletes it, leaving us to...
>
> > + spin_lock(&req->head->lock);
>
> ... get buggered on attempt to dereference a pointer fetched from freed and
> reused object.
FWIW, how painful would it be to pull the following trick:
* insert into wait queue under ->ctx_lock
* have wakeup do schedule_work() with aio_complete() done from that
* have ->ki_cancel() grab queue lock, remove from queue and use
the same schedule_work()
That way you'd get ->ki_cancel() with the same semantics as originally for
everything - "ask politely to finish ASAP", and called in the same locking
environment for everyone - under ->ctx_lock, that is. queue lock nests
inside ->ctx_lock; no magical flags, etc.
The cost is a schedule_work() for each async poll-related completion, same as you
have for fsync. I don't know whether that's too costly or not; it certainly
simplifies things, but whether it's OK performance-wise...
Comments?