Message-ID: <73daa6e6-537e-b0ce-e1e0-7afa75334509@redhat.com>
Date:   Thu, 10 Aug 2017 14:18:30 -0400
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        Pan Xinhui <xinhui@...ux.vnet.ibm.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Andrea Parri <parri.andrea@...il.com>
Subject: Re: [RESEND PATCH v5] locking/pvqspinlock: Relax cmpxchg's to improve
 performance on some archs

On 08/10/2017 12:22 PM, Waiman Long wrote:
> On 08/10/2017 12:15 PM, Peter Zijlstra wrote:
>> On Thu, Aug 10, 2017 at 09:58:57AM -0400, Waiman Long wrote:
>>> On 08/10/2017 09:27 AM, Waiman Long wrote:
>>>> On 08/10/2017 07:50 AM, Peter Zijlstra wrote:
>>>>> On Wed, May 24, 2017 at 09:38:28AM -0400, Waiman Long wrote:
>>>>>>   # of thread     w/o patch    with patch      % Change
>>>>>>   -----------     ---------    ----------      --------
>>>>>>        4         4053.3 Mop/s  4223.7 Mop/s     +4.2%
>>>>>>        8         3310.4 Mop/s  3406.0 Mop/s     +2.9%
>>>>>>       12         2576.4 Mop/s  2674.6 Mop/s     +3.8%
>>>>> Waiman, could you run those numbers again but with the below 'fixed' ?
>>>>>
>>>>>> @@ -361,6 +361,13 @@ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
>>>>>>  	 * observe its next->locked value and advance itself.
>>>>>>  	 *
>>>>>>  	 * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
>>>>>> +	 *
>>>>>> +	 * The write to next->locked in arch_mcs_spin_unlock_contended()
>>>>>> +	 * must be ordered before the read of pn->state in the cmpxchg()
>>>>>> +	 * below for the code to work correctly. However, this is not
>>>>>> +	 * guaranteed on all architectures when the cmpxchg() call fails.
>>>>>> +	 * Both x86 and PPC can provide that guarantee, but other
>>>>>> +	 * architectures may not.
>>>>>>  	 */
>>>>> 	smp_mb();
>>>>>
>>>>>>  	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
>>>>>>  		return;
>>>>> Ideally this Power CPU can optimize back-to-back SYNC instructions, but
>>>>> who knows...
>>>> Yes, I can run the numbers again. However, the changes here are in the
>>>> slowpath. My current patch optimizes the fast path only, and I don't
>>>> think my original test stresses the slowpath at all. I will have to make
>>>> some changes to stress the slowpath.
>>> Looking at past emails, I remember why I put the comment there. Putting
>>> an smp_mb() here will definitely have a negative performance impact on
>>> x86. So I put the comment there to remind myself that the current code
>>> may not work on ARM64.
>>>
>>> To fix that, my current thought is to have a cmpxchg variant that
>>> guarantees ordering for both success and failure, for example
>>> cmpxchg_ordered(). That way, we only need to insert the barrier on
>>> architectures that need it. That will be a separate patch rather than
>>> being folded into this one.
>> Might as well do an explicit:
>>
>> 	smp_mb__before_atomic()
>> 	cmpxchg_relaxed()
>> 	smp_mb__after_atomic()
>>
>> I suppose and not introduce new primitives.
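
Spelled out at that call site, your suggestion would look something like the
sketch below (just to make sure I read it right; not the actual patch, and
"old" is a local I made up for illustration):

	u8 old;

	/*
	 * Explicit barriers around a relaxed cmpxchg instead of a new
	 * cmpxchg_ordered() primitive.
	 */
	smp_mb__before_atomic();
	old = cmpxchg_relaxed(&pn->state, vcpu_halted, vcpu_hashed);
	smp_mb__after_atomic();
	if (old != vcpu_halted)
		return;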

I don't think we need the smp_mb__after_atomic(). The read has to be fully
ordered, but the write part may not need it, as the control dependency on
the returned old value should guard against taking the wrong action. Right?
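
IOW, something like this (sketch only, not the actual patch):

	smp_mb__before_atomic();
	if (cmpxchg_relaxed(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
		return;
	/*
	 * No smp_mb__after_atomic(): the branch on the returned old value
	 * forms a control dependency, which should be enough to order the
	 * store side.
	 */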

Cheers,
Longman
