Message-ID: <Ya5cx3EcU5SgV9dP@gmail.com>
Date:   Mon, 6 Dec 2021 10:56:07 -0800
From:   Eric Biggers <ebiggers@...nel.org>
To:     Alexander Viro <viro@...iv.linux.org.uk>,
        Benjamin LaHaise <bcrl@...ck.org>
Cc:     linux-aio@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, Ramji Jiyani <ramjiyani@...gle.com>,
        Christoph Hellwig <hch@....de>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Oleg Nesterov <oleg@...hat.com>, Jens Axboe <axboe@...nel.dk>,
        stable@...r.kernel.org
Subject: Re: [PATCH 1/2] aio: keep poll requests on waitqueue until completed

On Fri, Dec 03, 2021 at 04:23:00PM -0800, Eric Biggers wrote:
> @@ -1680,20 +1690,24 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>  	if (mask && !(mask & req->events))
>  		return 0;
>  
> -	list_del_init(&req->wait.entry);
> -
> +	/*
> +	 * Complete the iocb inline if possible.  This requires that two
> +	 * conditions be met:
> +	 *   1. The event mask must have been passed.  If a regular wakeup was
> +	 *	done instead, then mask == 0 and we have to call vfs_poll() to
> +	 *	get the events, so inline completion isn't possible.
> +	 *   2. ctx_lock must not be busy.  We have to use trylock because we
> +	 *      already hold the waitqueue lock, so this inverts the normal
> +	 *      locking order.  Use irqsave/irqrestore because not all
> +	 *      filesystems (e.g. fuse) call this function with IRQs disabled,
> +	 *      yet IRQs have to be disabled before ctx_lock is obtained.
> +	 */
>  	if (mask && spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
>  		struct kioctx *ctx = iocb->ki_ctx;
>  
> -		/*
> -		 * Try to complete the iocb inline if we can. Use
> -		 * irqsave/irqrestore because not all filesystems (e.g. fuse)
> -		 * call this function with IRQs disabled and because IRQs
> -		 * have to be disabled before ctx_lock is obtained.
> -		 */
> +		list_del_init(&req->wait.entry);
>  		list_del(&iocb->ki_list);
>  		iocb->ki_res.res = mangle_poll(mask);
> -		req->done = true;
>  		if (iocb->ki_eventfd && eventfd_signal_allowed()) {
>  			iocb = NULL;
>  			INIT_WORK(&req->work, aio_poll_put_work);
> @@ -1703,7 +1717,16 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>  		if (iocb)
>  			iocb_put(iocb);
>  	} else {

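(As context for the locking comment in the hunk above: the usual order
elsewhere in fs/aio.c is to take ctx_lock first and the waitqueue lock
second, roughly:

	spin_lock_irq(&ctx->ctx_lock);
	spin_lock(&req->head->lock);	/* waitqueue lock */

The wakeup callback runs with the waitqueue lock already held, so taking
ctx_lock unconditionally there would invert that order and could
deadlock ABBA-style; spin_trylock_irqsave() sidesteps that.)
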
I think I missed something here.  Now that the request is left on the waitqueue,
there needs to be a third condition for completing the iocb inline: the
completion work must not have already been scheduled.
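
For concreteness, a rough sketch of the wakeup-path check with all three
conditions (illustration only; "work_scheduled" is a hypothetical field
standing in for however the pending-work state ends up being tracked):

	/*
	 * Complete inline only if (1) the event mask was passed,
	 * (2) ctx_lock is uncontended, and (3) the completion work
	 * hasn't already been scheduled by an earlier wakeup.
	 */
	if (mask && !req->work_scheduled &&
	    spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
		/* inline completion, as in the hunk above */
	} else {
		/*
		 * Defer to aio_poll_complete_work() and remember that
		 * it is pending, so a later wakeup on the still-queued
		 * entry doesn't complete the iocb a second time.
		 */
		req->work_scheduled = true;
		schedule_work(&req->work);
	}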

- Eric
