Date:   Sun, 26 Feb 2023 19:22:47 -0500
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
        will@...nel.org
Cc:     linux-kernel@...r.kernel.org, boqun.feng@...il.com
Subject: Re: [PATCH 3/6] locking/rwsem: Rework writer wakeup

On 2/26/23 11:51, Peter Zijlstra wrote:
> On Sun, Feb 26, 2023 at 04:04:35PM +0100, Peter Zijlstra wrote:
>> On Thu, Feb 23, 2023 at 01:26:45PM +0100, Peter Zijlstra wrote:
>>> @@ -1072,7 +1067,7 @@ rwsem_down_read_slowpath(struct rw_semap
>>>   	for (;;) {
>>>   		set_current_state(state);
>>>   		if (!smp_load_acquire(&waiter.task)) {
>>> -			/* Matches rwsem_mark_wake()'s smp_store_release(). */
>>> +			/* Matches rwsem_waiter_wake()'s smp_store_release(). */
>>>   			break;
>>>   		}
>>>   		if (signal_pending_state(state, current)) {
>>> @@ -1143,54 +1138,36 @@ rwsem_down_write_slowpath(struct rw_sema
>>>   	} else {
>>>   		atomic_long_or(RWSEM_FLAG_WAITERS, &sem->count);
>> Found it; if we remove the try_write_lock below, then at least this
>> new-waiter path needs to still do a trylock.
>>
>> Let me go test the other patches on top of all this and push out a fresh
>> set if that all still works.
> queue.git locking/core
>
> We'll see what the robots make of it.

 From your new patch 3:

@@ -1151,55 +1154,39 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
                 }
         } else {
                 atomic_long_or(RWSEM_FLAG_WAITERS, &sem->count);
+               if (rwsem_try_write_lock(sem, &waiter))
+                       waiter.task = NULL;
         }
+       raw_spin_unlock_irq(&sem->wait_lock);

         /* wait until we successfully acquire the lock */
-       set_current_state(state);
         trace_contention_begin(sem, LCB_F_WRITE);

         for (;;) {
-               if (rwsem_try_write_lock(sem, &waiter)) {
-                       /* rwsem_try_write_lock() implies ACQUIRE on success */
+               set_current_state(state);
+               if (!smp_load_acquire(&waiter.task)) {
+                       /* Matches rwsem_waiter_wake()'s smp_store_release(). */
                         break;
                 }
-

The additional rwsem_try_write_lock() call seems to address the missed 
wakeup problem AFAICT.

I do have some concern that early lock transfer to a lock owner that has 
not yet been woken up may suppress writer lock stealing from optimistic 
spinning, causing a performance regression in some cases. Let's see if 
the test robot reports anything.

Cheers,
Longman
