Message-ID: <2c284b6d-99c1-2686-b8c0-fce8987e747f@redhat.com>
Date:   Wed, 26 Sep 2018 12:20:09 -0400
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>, will.deacon@....com,
        mingo@...nel.org
Cc:     linux-kernel@...r.kernel.org, andrea.parri@...rulasolutions.com,
        tglx@...utronix.de
Subject: Re: [RFC][PATCH 0/3] locking/qspinlock: Improve determinism for x86

On 09/26/2018 07:01 AM, Peter Zijlstra wrote:
> Back when Will did his qspinlock determinism patches, we were left with one
> cmpxchg loop on x86 due to the use of atomic_fetch_or(). Will proposed a nifty
> trick:
>
>   http://lkml.kernel.org/r/20180409145409.GA9661@arm.com
>
> But at the time we didn't pursue it. This series implements that and argues for
> its correctness. In particular it places an smp_mb__after_atomic() in
> between the two operations, which forces the load to come after the
> store (which is free on x86 anyway).
>
> In particular this ordering ensures a concurrent unlock cannot trigger
> the uncontended handoff. Also it ensures that if the xchg() happens
> after a (successful) trylock, we must observe that LOCKED bit.
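
If I am reading the series right, the pending-bit acquisition turns into
something like the sketch below. This is my own paraphrase for the sake
of discussion, not code lifted from the patches, so the helper name and
details may differ from what you actually posted:

	/*
	 * Sketch: replace the atomic_fetch_or() cmpxchg loop with a
	 * one-shot xchg() on the pending byte plus a load of the rest
	 * of the lock word, ordered by smp_mb__after_atomic().
	 */
	static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
	{
		/* Single RMW on the pending byte; no retry loop on x86. */
		u32 val = (u32)xchg_relaxed(&lock->pending, 1) << _Q_PENDING_OFFSET;

		/*
		 * Order the load below after the store above; on x86
		 * this is free (a compiler barrier only).
		 */
		smp_mb__after_atomic();

		/* Pick up the locked byte and tail from the lock word. */
		val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;

		return val;
	}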

When you say a "concurrent unlock cannot trigger the uncontended
handoff", do you mean the current code has this uncontended-handoff
problem, or does it only arise when comparing against doing a load
first followed by an xchg()?

Cheers,
Longman
