lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Wed, 8 Jul 2015 11:52:48 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Waiman Long <waiman.long@...com>
Cc:	Will Deacon <will.deacon@....com>, Ingo Molnar <mingo@...hat.com>,
	Arnd Bergmann <arnd@...db.de>,
	Thomas Gleixner <tglx@...utronix.de>,
	"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Scott J Norton <scott.norton@...com>,
	Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH 2/4] locking/qrwlock: Reduce reader/writer to reader lock
 transfer latency

On Tue, Jul 07, 2015 at 05:29:50PM -0400, Waiman Long wrote:
> On 07/07/2015 02:10 PM, Will Deacon wrote:

> >diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
> >index deb9e8b0eb9e..be8dc5c6fdbd 100644
> >--- a/include/asm-generic/qrwlock.h
> >+++ b/include/asm-generic/qrwlock.h
> >@@ -27,7 +27,6 @@
> >  /*
> >   * Writer states&  reader shift and bias
> >   */
> >-#define        _QW_WAITING     1               /* A writer is waiting     */
> >  #define        _QW_LOCKED      0xff            /* A writer holds the lock */
> >  #define        _QW_WMASK       0xff            /* Writer mask             */
> >  #define        _QR_SHIFT       8               /* Reader count shift      */
> >diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> >index 9f644933f6d4..4006aa1fbd0b 100644
> >--- a/kernel/locking/qrwlock.c
> >+++ b/kernel/locking/qrwlock.c
> >@@ -127,28 +127,23 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
> >         }
> >
> >         /*
> >-        * Set the waiting flag to notify readers that a writer is pending,
> >-        * or wait for a previous writer to go away.
> >+        * Wait for a previous writer to go away, then set the locked
> >+        * flag to notify future readers/writers that we are pending.
> >          */
> >         for (;;) {
> >                 struct __qrwlock *l = (struct __qrwlock *)lock;
> >
> >                 if (!READ_ONCE(l->wmode)&&
> >-                  (cmpxchg(&l->wmode, 0, _QW_WAITING) == 0))
> >+                  (cmpxchg(&l->wmode, 0, _QW_LOCKED) == 0))
> >                         break;
> >
> >                 cpu_relax_lowlatency();
> >         }
> >
> >-       /* When no more readers, set the locked flag */
> >-       for (;;) {
> >-               if ((atomic_read(&lock->cnts) == _QW_WAITING)&&
> >-                   (atomic_cmpxchg(&lock->cnts, _QW_WAITING,
> >-                                   _QW_LOCKED) == _QW_WAITING))
> >-                       break;
> >-
> >+       /* Wait for the readers to drain */
> >+       while (smp_load_acquire((u32 *)&lock->cnts)&  ~_QW_WMASK)
> >                 cpu_relax_lowlatency();
> >-       }
> >+
> >  unlock:
> >         arch_spin_unlock(&lock->lock);
> >  }
> 
> That changes the handshaking protocol. In this case, the readers will have
> to decrement its reader count to enable the writer to continue.

It already needs to, no?

> The interrupt context reader code has to be changed.

Agreed.

> This gives preference to writer and reader will be in a disadvantage.

I don't see that, everybody is still ordered by the wait queue / lock.

> I prefer the current setting as you won't know if the writer has the
> lock or not when you take a snapshot of the value of the lock. You
> need the whole time sequence in this case to figure it out and so will
> be more prone to error.

I still need to wake up, but I suspect we need to change
queue_read_{try,}lock() to use cmpxchg/inc_not_zero like things, which
is fine for ARM, but not so much for x86.

So I think I agree with Waiman, but am willing to be shown differently.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ