Message-ID: <945c28c3-5779-c8c8-13bb-40477abd1f0e@redhat.com>
Date: Thu, 10 Aug 2017 09:27:10 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Pan Xinhui <xinhui@...ux.vnet.ibm.com>,
Boqun Feng <boqun.feng@...il.com>,
Andrea Parri <parri.andrea@...il.com>
Subject: Re: [RESEND PATCH v5] locking/pvqspinlock: Relax cmpxchg's to improve
performance on some archs
On 08/10/2017 07:50 AM, Peter Zijlstra wrote:
> On Wed, May 24, 2017 at 09:38:28AM -0400, Waiman Long wrote:
>>  # of thread    w/o patch      with patch     % Change
>>  -----------    ---------      ----------     --------
>>       4        4053.3 Mop/s   4223.7 Mop/s     +4.2%
>>       8        3310.4 Mop/s   3406.0 Mop/s     +2.9%
>>      12        2576.4 Mop/s   2674.6 Mop/s     +3.8%
> Waiman, could you run those numbers again but with the below 'fixed' ?
>
>> @@ -361,6 +361,13 @@ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
>> * observe its next->locked value and advance itself.
>> *
>> * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
>> + *
>> + * The write to next->locked in arch_mcs_spin_unlock_contended()
>> + * must be ordered before the read of pn->state in the cmpxchg()
>> + * below for the code to work correctly. However, this is not
>> + * guaranteed on all architectures when the cmpxchg() call fails.
>> + * Both x86 and PPC can provide that guarantee, but other
>> + * architectures may not.
>> */
> smp_mb();
>
>> if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
>> return;
> Ideally this Power CPU can optimize back-to-back SYNC instructions, but
> who knows...
Yes, I can run the numbers again. However, the change here is in the
slowpath. My current patch optimizes only the fast path, and my original
test doesn't stress the slowpath at all, I think. I will have to make
some changes to stress the slowpath.
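
For reference, below is a minimal stand-alone sketch of the ordering in
question, written with user-space C11 atomics rather than the kernel
primitives (next_locked and pn_state are just stand-ins for next->locked
and pn->state, and kick_node_sketch() is not the real pv_kick_node()).
The explicit full fence before the compare-and-swap is what orders the
earlier store against the load inside the CAS even on the failure path,
where a relaxed CAS by itself guarantees nothing:

#include <stdatomic.h>
#include <stdio.h>

enum vcpu_state { vcpu_running, vcpu_halted, vcpu_hashed };

static _Atomic int next_locked;		/* stand-in for next->locked */
static _Atomic int pn_state;		/* stand-in for pn->state    */

static void kick_node_sketch(void)
{
	int expected = vcpu_halted;

	/* Release the MCS successor (arch_mcs_spin_unlock_contended()). */
	atomic_store_explicit(&next_locked, 1, memory_order_release);

	/*
	 * Full barrier: order the store above before the load inside
	 * the CAS below, even when the CAS fails and its own relaxed
	 * ordering provides nothing.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	if (!atomic_compare_exchange_strong_explicit(&pn_state, &expected,
						     vcpu_hashed,
						     memory_order_relaxed,
						     memory_order_relaxed))
		return;	/* vCPU was running; it will observe next_locked */

	/* ... the real code would hash the lock and set _Q_SLOW_VAL ... */
}

int main(void)
{
	atomic_store(&pn_state, vcpu_halted);
	kick_node_sketch();
	printf("pn_state = %d\n", atomic_load(&pn_state));
	return 0;
}

Dropping the fence leaves the failure path unordered on architectures
where a failed cmpxchg() is not a full barrier, which is exactly the
case the added comment warns about.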
Cheers,
Longman