Message-ID: <20180713100257.GE32020@arm.com>
Date:   Fri, 13 Jul 2018 11:02:57 +0100
From:   Will Deacon <will.deacon@....com>
To:     Waiman Long <longman@...hat.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        Mark Ray <mark.ray@....com>, Joe Mario <jmario@...hat.com>,
        Scott Norton <scott.norton@....com>
Subject: Re: [PATCH] locking/rwsem: Take read lock immediately if queue
 empty with no writer

On Tue, Jul 10, 2018 at 02:31:30PM -0400, Waiman Long wrote:
> It was found that a constant stream of readers could keep the count
> negative most of the time after an initial trigger by a writer, even
> when no writer was present afterward. As a result, most readers had
> to go through the slowpath, reducing their performance.
> 
> To prevent that from happening, an additional check is added to detect
> the special case where the reader would be the only task in the wait
> queue and no writer is present. When that happens, the reader can just
> take the lock and return immediately without further action. Other
> incoming readers then won't see a waiter present and won't be forced
> into the slowpath.
> 
> After the list_empty() call, the CPU should already have the lock
> cacheline, so the additional semaphore count check shouldn't have any
> performance impact.
> 
> Signed-off-by: Waiman Long <longman@...hat.com>
> ---
>  kernel/locking/rwsem-xadd.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
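
The behaviour described in the changelog follows from the rwsem-xadd
count layout. As a rough, illustrative sketch (bias values reproduced
from memory; check kernel/locking/rwsem-xadd.c for the authoritative
definitions):

	/*
	 * Illustrative only: approximate 64-bit bias values used by the
	 * xadd scheme (quoted from memory, not verbatim from the tree).
	 */
	#define RWSEM_ACTIVE_MASK	0xffffffffL
	#define RWSEM_ACTIVE_BIAS	1L
	#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK - 1)  /* ~ -2^32 */

	/*
	 * 1. A writer queues up: count += RWSEM_WAITING_BIAS, so count < 0.
	 * 2. Each arriving reader fast-paths count += RWSEM_ACTIVE_BIAS,
	 *    still sees a negative result, and drops into the slowpath.
	 * 3. Slowpath readers add themselves to the wait queue, keeping it
	 *    non-empty, so the waiting bias never clears: the negative
	 *    count is self-sustaining even after the writer has gone.
	 *
	 * The hunk below breaks the cycle at step 3: a slowpath reader that
	 * finds the queue empty with count >= 0 (its own active bias is
	 * already in, so count is at least 1) returns with the read lock
	 * held instead of queueing and re-arming the waiting bias.
	 */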

This looks ok to me, but it would be nice to include some performance
figures in the commit log. Do you have any? Phrases such as "shouldn't have
any performance impact" and "probably generate better code" don't fill me
with good feelings ;)
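
Even a rough reader-throughput number would do. Purely as an
illustration of the kind of test meant here (the module name, parameters
and output format are all invented, and a real run would also want a
writer thread to set the waiting bias first), something like:

	#include <linux/module.h>
	#include <linux/kthread.h>
	#include <linux/rwsem.h>
	#include <linux/sched.h>
	#include <linux/delay.h>
	#include <linux/slab.h>
	#include <linux/atomic.h>
	#include <linux/err.h>

	static int nreaders = 4;
	module_param(nreaders, int, 0444);

	static DECLARE_RWSEM(test_sem);
	static atomic_long_t acquisitions = ATOMIC_LONG_INIT(0);
	static struct task_struct **readers;

	static int reader_fn(void *unused)
	{
		while (!kthread_should_stop()) {
			down_read(&test_sem);
			/* empty critical section: measure lock overhead only */
			up_read(&test_sem);
			atomic_long_inc(&acquisitions);
			cond_resched();
		}
		return 0;
	}

	static int __init rwsem_bench_init(void)
	{
		int i;

		readers = kcalloc(nreaders, sizeof(*readers), GFP_KERNEL);
		if (!readers)
			return -ENOMEM;

		for (i = 0; i < nreaders; i++)
			readers[i] = kthread_run(reader_fn, NULL,
						 "rwsem-rd/%d", i);

		msleep(1000);	/* let the readers run for a second */
		pr_info("rwsem-bench: ~%ld read acquisitions/sec with %d readers\n",
			atomic_long_read(&acquisitions), nreaders);
		return 0;
	}

	static void __exit rwsem_bench_exit(void)
	{
		int i;

		for (i = 0; i < nreaders; i++)
			if (!IS_ERR_OR_NULL(readers[i]))
				kthread_stop(readers[i]);
		kfree(readers);
	}

	module_init(rwsem_bench_init);
	module_exit(rwsem_bench_exit);
	MODULE_LICENSE("GPL");

Comparing that rate before and after the patch, with and without the
initial writer, would back up the changelog claims.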

Will

> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index 3064c50..ef8a5f3 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -233,8 +233,22 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
>  	waiter.type = RWSEM_WAITING_FOR_READ;
>  
>  	raw_spin_lock_irq(&sem->wait_lock);
> -	if (list_empty(&sem->wait_list))
> +	if (list_empty(&sem->wait_list)) {
> +		/*
> +		 * In the unlikely event that the task is the only one in
> +		 * the wait queue and a writer isn't present, it can have
> +		 * the lock and return immediately without going through
> +		 * the remaining slowpath code.
> +		 *
> +		 * Count won't be 0, but allowing it will probably generate
> +		 * better code.
> +		 */
> +		if (unlikely(atomic_long_read(&sem->count) >= 0)) {
> +			raw_spin_unlock_irq(&sem->wait_lock);
> +			return sem;
> +		}
>  		adjustment += RWSEM_WAITING_BIAS;
> +	}
>  	list_add_tail(&waiter.list, &sem->wait_list);
>  
>  	/* we're now waiting on the lock, but no longer actively locking */
> -- 
> 1.8.3.1
> 
