Message-ID: <02825e73-c366-3928-1e0f-eec707627103@redhat.com>
Date:   Thu, 18 Apr 2019 10:35:17 -0400
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...hat.com>, Will Deacon <will.deacon@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-kernel@...r.kernel.org, x86@...nel.org,
        Davidlohr Bueso <dave@...olabs.net>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v4 11/16] locking/rwsem: Enable readers spinning on writer

On 04/18/2019 04:57 AM, Peter Zijlstra wrote:
> On Wed, Apr 17, 2019 at 01:34:01PM -0400, Waiman Long wrote:
>> On 04/17/2019 09:56 AM, Peter Zijlstra wrote:
>>> On Sat, Apr 13, 2019 at 01:22:54PM -0400, Waiman Long wrote:
>>>> @@ -549,7 +582,7 @@ static noinline enum owner_state rwsem_spin_on_owner(struct rw_semaphore *sem)
>>>>  	return !owner ? OWNER_NULL : OWNER_READER;
>>>>  }
>>>>  
>>>> -static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
>>>> +static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
>>>>  {
>>>>  	bool taken = false;
>>>>  	bool is_rt_task = rt_task(current);
>>>> @@ -558,9 +591,6 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
>>>>  	preempt_disable();
>>>>  
>>>>  	/* sem->wait_lock should not be held when doing optimistic spinning */
>>>> -	if (!rwsem_can_spin_on_owner(sem))
>>>> -		goto done;
>>>> -
>>>>  	if (!osq_lock(&sem->osq))
>>>>  		goto done;
>>>>  
>>>> @@ -580,10 +610,11 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
>>>>  		/*
>>>>  		 * Try to acquire the lock
>>>>  		 */
>>>> -		if (rwsem_try_write_lock_unqueued(sem)) {
>>>> -			taken = true;
>>>> +		taken = wlock ? rwsem_try_write_lock_unqueued(sem)
>>>> +			      : rwsem_try_read_lock_unqueued(sem);
>>>> +
>>>> +		if (taken)
>>>>  			break;
>>>> -		}
>>>>  
>>>>  		/*
>>>>  		 * An RT task cannot do optimistic spinning if it cannot
>>> Alternatively you pass the trylock function as an argument:
>>>
>>> static bool rwsem_optimistic_spin(struct rw_semaphore *sem,
>>> 				  bool (*trylock)(struct rw_semaphore *sem))
>>> {
>>> 	...
>>> 		if (trylock(sem)) {
>>> 			taken = true;
>>> 			goto unlock;
>>> 		}
>>> 	...
>>> }
>>>
>> With retpoline, an indirect function call will be slower.
> With compiler optimization we can avoid that. Just mark the function as
> __always_inline, there's only two call-sites, each with a different
> trylock.
>
> It might have already done that anyway, and used constant propagation
> on your bool, but the function pointer one is far easier to read.

The bool was extended to an "unsigned long", and the trylock functions
will have different argument lists in the last patch.

-Longman

