Message-Id: <20180523192022.1703-2-hch@lst.de>
Date: Wed, 23 May 2018 21:19:50 +0200
From: Christoph Hellwig <hch@....de>
To: viro@...iv.linux.org.uk
Cc: Avi Kivity <avi@...lladb.com>, linux-aio@...ck.org,
linux-fsdevel@...r.kernel.org, netdev@...r.kernel.org,
linux-api@...r.kernel.org, linux-kernel@...r.kernel.org,
stable@...nel.org
Subject: [PATCH 01/33] fix io_destroy()/aio_complete() race
From: Al Viro <viro@...iv.linux.org.uk>

If io_destroy() gets to cancelling everything that can be cancelled and
gets to kiocb_cancel() calling the function the driver has left in
->ki_cancel, it becomes vulnerable to a race with IO completion.  At that
point req is already taken off the list and aio_complete() does *NOT*
spin until we (in free_ioctx_users()) release ->ctx_lock.  As a result,
it proceeds to kiocb_free(), freeing req just as it gets passed to
->ki_cancel().
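
For context, the completion side of the race looks roughly like this
(a simplified paraphrase of aio_complete() in fs/aio.c of that era, not
part of this patch; the exact test differs between versions):

	/*
	 * aio_complete() only takes ->ctx_lock while the request is
	 * still on ctx->active_reqs.  Once free_ioctx_users() has done
	 * list_del_init(&req->ki_list), this branch is skipped and
	 * nothing below serializes against ->ctx_lock.
	 */
	if (!list_empty_careful(&iocb->ki_list)) {
		unsigned long flags;

		spin_lock_irqsave(&ctx->ctx_lock, flags);
		list_del(&iocb->ki_list);
		spin_unlock_irqrestore(&ctx->ctx_lock, flags);
	}
	/*
	 * ... completion continues and eventually frees the request,
	 * which with the old ordering can happen while ->ki_cancel()
	 * is still running on it.
	 */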

The fix is simple - remove the request from the list after the call to
kiocb_cancel().  All instances of ->ki_cancel() already have to cope with
being called with the iocb still on the list - that's what happens in
io_cancel(2).
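
For comparison, io_cancel(2) already invokes ->ki_cancel() with the iocb
left on ->active_reqs, roughly (paraphrased, argument details elided and
version-dependent):

	/*
	 * sys_io_cancel(): the request stays on ctx->active_reqs across
	 * the kiocb_cancel() call, under ->ctx_lock the whole time.
	 */
	spin_lock_irq(&ctx->ctx_lock);
	kiocb = lookup_kiocb(ctx, iocb);
	if (kiocb)
		ret = kiocb_cancel(kiocb);
	else
		ret = -EINVAL;
	spin_unlock_irq(&ctx->ctx_lock);
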
Cc: stable@...nel.org
Fixes: 0460fef2a921 "aio: use cancellation list lazily"
Signed-off-by: Al Viro <viro@...iv.linux.org.uk>
Signed-off-by: Christoph Hellwig <hch@....de>
---
fs/aio.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 755d3f57bcc8..1c383bb44b2d 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -639,9 +639,8 @@ static void free_ioctx_users(struct percpu_ref *ref)
while (!list_empty(&ctx->active_reqs)) {
req = list_first_entry(&ctx->active_reqs,
struct aio_kiocb, ki_list);
-
- list_del_init(&req->ki_list);
kiocb_cancel(req);
+ list_del_init(&req->ki_list);
}
spin_unlock_irq(&ctx->ctx_lock);
--
2.17.0