Date:   Mon, 6 Aug 2018 15:27:05 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Christoph Hellwig <hch@....de>
Cc:     viro@...iv.linux.org.uk, Avi Kivity <avi@...lladb.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        linux-aio@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] aio: allow direct aio poll comletions for keyed
 wakeups

On Mon,  6 Aug 2018 10:30:58 +0200 Christoph Hellwig <hch@....de> wrote:

> If we get a keyed wakeup for an aio poll waitqueue and wake can acquire the
> ctx_lock without spinning we can just complete the iocb straight from the
> wakeup callback to avoid a context switch.

Why do we try to avoid spinning on the lock?

> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1672,13 +1672,26 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>  		void *key)
>  {
>  	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
> +	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
>  	__poll_t mask = key_to_poll(key);
>  
>  	req->woken = true;
>  
>  	/* for instances that support it check for an event match first: */
> -	if (mask && !(mask & req->events))
> -		return 0;
> +	if (mask) {
> +		if (!(mask & req->events))
> +			return 0;
> +
> +		/* try to complete the iocb inline if we can: */

i.e., this comment explains "what" but not "why".

(There's a typo in Subject:, btw)

> +		if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
> +			list_del(&iocb->ki_list);
> +			spin_unlock(&iocb->ki_ctx->ctx_lock);
> +
> +			list_del_init(&req->wait.entry);
> +			aio_poll_complete(iocb, mask);
> +			return 1;
> +		}
> +	}
>  
>  	list_del_init(&req->wait.entry);
>  	schedule_work(&req->work);
