Message-ID: <14f85d0c-8303-4710-b8b1-248ce27a6e1f@acm.org>
Date: Mon, 4 Mar 2024 08:15:15 -0800
From: Bart Van Assche <bvanassche@....org>
To: Edward Adam Davis <eadavis@...com>,
syzbot+b91eb2ed18f599dd3c31@...kaller.appspotmail.com
Cc: bcrl@...ck.org, brauner@...nel.org, jack@...e.cz, linux-aio@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk
Subject: Re: [PATCH] fs/aio: fix uaf in sys_io_cancel
On 3/3/24 04:21, Edward Adam Davis wrote:
> The aio poll work, aio_poll_complete_work(), needs to be synchronized with
> the io_cancel() syscall. Otherwise, when the poll work executes first, the
> syscall may access the already-freed aio_kiocb object.
>
> Fixes: 54cbc058d86b ("fs/aio: Make io_cancel() generate completions again")
> Reported-and-tested-by: syzbot+b91eb2ed18f599dd3c31@...kaller.appspotmail.com
> Signed-off-by: Edward Adam Davis <eadavis@...com>
> ---
> fs/aio.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index 28223f511931..0fed22ed9eb8 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1762,9 +1762,8 @@ static void aio_poll_complete_work(struct work_struct *work)
> } /* else, POLLFREE has freed the waitqueue, so we must complete */
> list_del_init(&iocb->ki_list);
> iocb->ki_res.res = mangle_poll(mask);
> - spin_unlock_irq(&ctx->ctx_lock);
> -
> iocb_put(iocb);
> + spin_unlock_irq(&ctx->ctx_lock);
> }
>
> /* assumes we are called with irqs disabled */
> @@ -2198,7 +2197,6 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
> break;
> }
> }
> - spin_unlock_irq(&ctx->ctx_lock);
>
> /*
> * The result argument is no longer used - the io_event is always
> @@ -2206,6 +2204,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
> */
> if (ret == 0 && kiocb->rw.ki_flags & IOCB_AIO_RW)
> aio_complete_rw(&kiocb->rw, -EINTR);
> + spin_unlock_irq(&ctx->ctx_lock);
>
> percpu_ref_put(&ctx->users);
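
As I understand the syzbot report, the problem is ordering: io_cancel()
drops ctx_lock after ->ki_cancel() has made the completion work runnable,
and then still dereferences kiocb->rw.ki_flags, while the work's final
iocb_put() may already have freed the aio_kiocb. A toy userspace model of
that ordering (illustrative only -- the names and the pthreads scaffolding
are made up, this is not the kernel code):

/* toy_uaf.c - illustrative only; build with: cc -pthread toy_uaf.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_kiocb {
	int refcount;	/* stands in for the aio_kiocb refcount */
	int ki_flags;	/* stands in for kiocb->rw.ki_flags */
};

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

static void toy_put(struct toy_kiocb *k)
{
	if (--k->refcount == 0)
		free(k);	/* the final put frees the object */
}

/* Stands in for aio_poll_complete_work(). */
static void *complete_work(void *arg)
{
	pthread_mutex_lock(&ctx_lock);
	toy_put(arg);
	pthread_mutex_unlock(&ctx_lock);
	return NULL;
}

int main(void)
{
	struct toy_kiocb *k = malloc(sizeof(*k));
	pthread_t worker;

	k->refcount = 1;
	k->ki_flags = 0x1;

	/* Stands in for io_cancel(): cancel under the lock ... */
	pthread_mutex_lock(&ctx_lock);
	pthread_create(&worker, NULL, complete_work, k);
	pthread_mutex_unlock(&ctx_lock);

	/*
	 * ... then dereference the request after dropping the lock.
	 * Nothing stops complete_work() from dropping the last
	 * reference first, so this load can hit freed memory.
	 */
	if (k->ki_flags & 0x1)
		puts("would call aio_complete_rw(-EINTR) here");

	pthread_join(worker, NULL);
	return 0;
}

Under AddressSanitizer the load after the unlock can show up as a heap
use-after-free on schedules where the worker wins the race.
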
I'm not enthusiastic about the above patch because it increases the amount
of code executed with the ctx_lock held. Wouldn't something like the
untested patch below be a better solution?
Thanks,
Bart.
diff --git a/fs/aio.c b/fs/aio.c
index 28223f511931..c6fb10321e48 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -2177,6 +2177,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
struct kioctx *ctx;
struct aio_kiocb *kiocb;
int ret = -EINVAL;
+ bool is_cancelled_rw = false;
u32 key;
u64 obj = (u64)(unsigned long)iocb;
@@ -2193,6 +2194,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
/* TODO: use a hash or array, this sucks. */
list_for_each_entry(kiocb, &ctx->active_reqs, ki_list) {
if (kiocb->ki_res.obj == obj) {
+ is_cancelled_rw = kiocb->rw.ki_flags & IOCB_AIO_RW;
ret = kiocb->ki_cancel(&kiocb->rw);
list_del_init(&kiocb->ki_list);
break;
@@ -2204,7 +2206,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
* The result argument is no longer used - the io_event is always
* delivered via the ring buffer.
*/
- if (ret == 0 && kiocb->rw.ki_flags & IOCB_AIO_RW)
+ if (ret == 0 && is_cancelled_rw)
aio_complete_rw(&kiocb->rw, -EINTR);
percpu_ref_put(&ctx->users);
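
For what it's worth, the shape of that change -- take a snapshot of
whatever the post-unlock code needs while ctx_lock is still held -- looks
like this in a generic userspace sketch (made-up names, not fs/aio.c):

/* toy_snapshot.c - illustrative only; made-up names */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define REQ_RW	0x1

struct toy_req {
	unsigned int flags;
	bool cancelled;
};

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static struct toy_req active_req = { .flags = REQ_RW };

static int toy_cancel(bool *was_rw)
{
	pthread_mutex_lock(&ctx_lock);
	/* Snapshot while the lock pins the request. */
	*was_rw = active_req.flags & REQ_RW;
	active_req.cancelled = true;
	pthread_mutex_unlock(&ctx_lock);
	return 0;
}

int main(void)
{
	bool was_rw;

	/* The post-unlock path consults only the local snapshot. */
	if (toy_cancel(&was_rw) == 0 && was_rw)
		puts("deliver the -EINTR completion for the rw request");
	return 0;
}

In this sketch the code after the unlock consults only the local
snapshot, so the lock hold time stays short and nothing reaches back
into memory that another context may free.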