Date: Mon, 4 Mar 2024 12:03:43 -0500
From: Benjamin LaHaise <ben@...munityfibre.ca>
To: Bart Van Assche <bvanassche@....org>
Cc: Edward Adam Davis <eadavis@...com>,
	syzbot+b91eb2ed18f599dd3c31@...kaller.appspotmail.com,
	brauner@...nel.org, jack@...e.cz, linux-aio@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk
Subject: Re: [PATCH] fs/aio: fix uaf in sys_io_cancel

On Mon, Mar 04, 2024 at 08:15:15AM -0800, Bart Van Assche wrote:
> On 3/3/24 04:21, Edward Adam Davis wrote:
> >The aio poll work aio_poll_complete_work() needs to be synchronized with
> >the syscall io_cancel(). Otherwise, when the poll work executes first, the
> >syscall may access the released aio_kiocb object.
> >
> >Fixes: 54cbc058d86b ("fs/aio: Make io_cancel() generate completions again")
> >Reported-and-tested-by: syzbot+b91eb2ed18f599dd3c31@...kaller.appspotmail.com
> >Signed-off-by: Edward Adam Davis <eadavis@...com>
> >---
> >  fs/aio.c | 5 ++---
> >  1 file changed, 2 insertions(+), 3 deletions(-)
> >
> >diff --git a/fs/aio.c b/fs/aio.c
> >index 28223f511931..0fed22ed9eb8 100644
> >--- a/fs/aio.c
> >+++ b/fs/aio.c
> >@@ -1762,9 +1762,8 @@ static void aio_poll_complete_work(struct work_struct *work)
> >  	} /* else, POLLFREE has freed the waitqueue, so we must complete */
> >  	list_del_init(&iocb->ki_list);
> >  	iocb->ki_res.res = mangle_poll(mask);
> >-	spin_unlock_irq(&ctx->ctx_lock);
> >-
> >  	iocb_put(iocb);
> >+	spin_unlock_irq(&ctx->ctx_lock);
> >  }
> >  
> >  /* assumes we are called with irqs disabled */
> >@@ -2198,7 +2197,6 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
> >  			break;
> >  		}
> >  	}
> >-	spin_unlock_irq(&ctx->ctx_lock);
> >  
> >  	/*
> >  	 * The result argument is no longer used - the io_event is always
> >@@ -2206,6 +2204,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
> >  	 */
> >  	if (ret == 0 && kiocb->rw.ki_flags & IOCB_AIO_RW)
> >  		aio_complete_rw(&kiocb->rw, -EINTR);
> >+	spin_unlock_irq(&ctx->ctx_lock);
> >  
> >  	percpu_ref_put(&ctx->users);

This is just so wrong there aren't even words to describe it.  I
recommend reverting all of Bart's patches, since they were not reviewed
by anyone with a sufficient level of familiarity with fs/aio.c to get it
right.

		-ben

> I'm not enthusiastic about the above patch because it increases the amount
> of code executed with the ctx_lock held. Wouldn't something like the
> untested patch below be a better solution?
> 
> Thanks,
> 
> Bart.
> 
> 
> diff --git a/fs/aio.c b/fs/aio.c
> index 28223f511931..c6fb10321e48 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -2177,6 +2177,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
>  	struct kioctx *ctx;
>  	struct aio_kiocb *kiocb;
>  	int ret = -EINVAL;
> +	bool is_cancelled_rw = false;
>  	u32 key;
>  	u64 obj = (u64)(unsigned long)iocb;
> 
> @@ -2193,6 +2194,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
>  	/* TODO: use a hash or array, this sucks. */
>  	list_for_each_entry(kiocb, &ctx->active_reqs, ki_list) {
>  		if (kiocb->ki_res.obj == obj) {
> +			is_cancelled_rw = kiocb->rw.ki_flags & IOCB_AIO_RW;
>  			ret = kiocb->ki_cancel(&kiocb->rw);
>  			list_del_init(&kiocb->ki_list);
>  			break;
> @@ -2204,7 +2206,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
>  	 * The result argument is no longer used - the io_event is always
>  	 * delivered via the ring buffer.
>  	 */
> -	if (ret == 0 && kiocb->rw.ki_flags & IOCB_AIO_RW)
> +	if (ret == 0 && is_cancelled_rw)
>  		aio_complete_rw(&kiocb->rw, -EINTR);
> 
>  	percpu_ref_put(&ctx->users);
> 
> 

-- 
"Thought is the essence of where you are now."
