Date:   Mon, 21 Aug 2017 21:42:46 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Will Deacon <will.deacon@....com>
Cc:     Waiman Long <longman@...hat.com>, Ingo Molnar <mingo@...hat.com>,
        linux-kernel@...r.kernel.org,
        Pan Xinhui <xinhui@...ux.vnet.ibm.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Andrea Parri <parri.andrea@...il.com>,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>
Subject: Re: [RESEND PATCH v5] locking/pvqspinlock: Relax cmpxchg's to
 improve performance on some archs

On Mon, Aug 21, 2017 at 09:25:50PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 21, 2017 at 07:00:02PM +0100, Will Deacon wrote:
> > > No, I meant _from_ the LL load, not _to_ a later load.
> > 
> > Sorry, I'm still not following enough to give you a definitive answer on
> > that. Could you give an example, please? These sequences usually run in
> > a loop, so the conditional branch back (based on the status flag) is where
> > the read-after-read comes in.
> > 
> > Any control dependencies from the loaded data exist regardless of the status
> > flag.
> 
> Basically what Waiman ended up doing, something like:
> 
>         if (cmpxchg_relaxed(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
>                 return;
> 
>         WRITE_ONCE(l->locked, _Q_SLOW_VAL);
> 
> Where the STORE depends on the LL value being 'complete'.
> 
> 
> For any RmW we can only create a control dependency from the LOAD. The
> same could be done for something like:
> 
> 	if (atomic_inc_not_zero(&obj->refs))
> 		WRITE_ONCE(obj->foo, 1);

Obviously I meant the hypothetical atomic_inc_not_zero_relaxed() here;
otherwise all the implied smp_mb()s spoil the game.
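
For reference (not suggesting we actually add it), a minimal sketch of
such a relaxed variant, assuming the usual try_cmpxchg loop, could be:

        /*
         * Hypothetical atomic_inc_not_zero_relaxed() -- not a mainline
         * API; sketched only to make the example concrete. Any ordering
         * it provides comes from the LOAD side of the RmW; the store
         * side is fully relaxed.
         */
        static inline bool atomic_inc_not_zero_relaxed(atomic_t *v)
        {
                int old = atomic_read(v);

                do {
                        if (!old)
                                return false;
                } while (!atomic_try_cmpxchg_relaxed(v, &old, old + 1));

                return true;
        }

The WRITE_ONCE() in the caller is then ordered only by the control
dependency on the loaded ->refs value.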

> Where we only do the STORE if we acquire the reference. While the
> WRITE_ONCE() will not be ordered against the increment, it is ordered
> against the LL and we know it must not be 0.
> 
> Per the LL/SC loop we'll have observed a !0 value and committed the SC
> (which need not be visible or ordered against any later store) but both
> STORES (SC and the WRITE_ONCE) must be after the ->refs LOAD.
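
To make that concrete: on an LL/SC machine the inc-not-zero plus the
later store might expand to something like the following arm64-flavoured
pseudo-assembly (a sketch, not the actual implementation; 'refs' and
'foo' stand for the addresses of the respective fields):

        loop:
                ldxr    w0, [refs]       // LL: load ->refs
                cbz     w0, out          // control dependency on loaded value
                add     w0, w0, #1
                stxr    w1, w0, [refs]   // SC: not ordered vs later stores
                cbnz    w1, loop         // retry on SC failure (status flag)
                mov     w2, #1
                str     w2, [foo]        // WRITE_ONCE(obj->foo, 1)
        out:

Both the SC and the final STR sit behind the CBZ on the loaded value, so
neither store can be observed unless ->refs was seen non-zero, even
though nothing orders the two stores against each other.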
