Message-ID: <20180802092234.GA13797@lst.de>
Date: Thu, 2 Aug 2018 11:22:34 +0200
From: Christoph Hellwig <hch@....de>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Christoph Hellwig <hch@....de>, Avi Kivity <avi@...lladb.com>,
linux-aio@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] aio: implement IOCB_CMD_POLL
On Thu, Aug 02, 2018 at 01:21:22AM +0100, Al Viro wrote:
> So what happens if
> * we call aio_poll(), add the sucker to queue and see that we need
> to wait
> * add to ->active_refs just as the wakeup comes
active_reqs, I guess.
> * wakeup removes from queue and hits schedule_work()
> * io_cancel() is called, triggering aio_poll_cancel(), which sees that
> we are not from queue and buggers off. We are gone from ->active_refs.
> * aio_poll_complete_work() is called, sees no ->cancelled
> * aio_poll_complete_work() calls vfs_poll(), sees nothing interesting
> and puts us back on the queue.
So let me draw this up, we start with the following:

THREAD 1                                THREAD 2

aio_poll
  vfs_poll(...)
    add_wait_queue()
  (no pending mask)
  spin_lock_irq(&ctx->ctx_lock);
  list_add_tail(..., &ctx->active_reqs)
                                        aio_poll_wake
  spin_unlock_irq(&ctx->ctx_lock);
                                          (spin_trylock failed)
                                          list_del_init(&req->wait.entry);
                                          schedule_work(&req->work);
Now switching to two new threads:

io_cancel thread                        worker thread

                                        vfs_poll()
                                          (mask = 0)
aio_poll_cancel
  (not on waitqueue, done)
remove from active_reqs
                                          add_wait_queue()

iocb still around
>
> Unless I'm misreading it, cancel will end up with iocb still around and now
> impossible to cancel... What am I missing?
Yes, I think you are right. I'll see how I can handle that case.

One of the easiest options would be to only support aio poll on
file ops that support keyed wakeups; we'd just need to pass that
information up.