Message-ID: <20170810115034.ie65wfxepiq6noew@hirez.programming.kicks-ass.net>
Date: Thu, 10 Aug 2017 13:50:34 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Pan Xinhui <xinhui@...ux.vnet.ibm.com>,
Boqun Feng <boqun.feng@...il.com>,
Andrea Parri <parri.andrea@...il.com>
Subject: Re: [RESEND PATCH v5] locking/pvqspinlock: Relax cmpxchg's to
improve performance on some archs

On Wed, May 24, 2017 at 09:38:28AM -0400, Waiman Long wrote:
>
>   # of thread     w/o patch      with patch     % Change
>   -----------     ---------      ----------     --------
>        4         4053.3 Mop/s   4223.7 Mop/s     +4.2%
>        8         3310.4 Mop/s   3406.0 Mop/s     +2.9%
>       12         2576.4 Mop/s   2674.6 Mop/s     +3.8%

Waiman, could you run those numbers again but with the below 'fixed' ?

> @@ -361,6 +361,13 @@ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
>  	 * observe its next->locked value and advance itself.
>  	 *
>  	 * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
> +	 *
> +	 * The write to next->locked in arch_mcs_spin_unlock_contended()
> +	 * must be ordered before the read of pn->state in the cmpxchg()
> +	 * below for the code to work correctly. However, this is not
> +	 * guaranteed on all architectures when the cmpxchg() call fails.
> +	 * Both x86 and PPC can provide that guarantee, but other
> +	 * architectures not necessarily.
>  	 */

	smp_mb();

>  	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
>  		return;
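
The ordering the quoted comment worries about is the store-buffering shape
between pv_wait_node() and pv_kick_node(). A rough LKMM-style litmus sketch
of it (illustrative only; pn->state and next->locked collapsed to plain
ints, with 1/2 standing in for vcpu_halted/vcpu_hashed):

C pv-kick-node-sb

{}

P0(int *locked, int *state)
{
	int r0;

	WRITE_ONCE(*state, 1);		/* pv_wait_node(): pn->state = vcpu_halted */
	smp_mb();			/* barrier half of smp_store_mb() */
	r0 = READ_ONCE(*locked);	/* recheck next->locked before pv_wait() */
}

P1(int *locked, int *state)
{
	int r1;

	WRITE_ONCE(*locked, 1);		/* arch_mcs_spin_unlock_contended() */
	r1 = cmpxchg(state, 1, 2);	/* pv_kick_node(): halted -> hashed */
}

exists (0:r0=0 /\ 1:r1=0)

The exists clause is the bad case: the waiter sees next->locked == 0 and
goes on to pv_wait() while the kicker's cmpxchg() fails and returns. A
failed cmpxchg() is not required to provide any ordering, so nothing
generically orders P1's store against its load; x86's locked instruction
and PPC's leading sync happen to forbid the outcome, which is what the
comment leans on. The explicit smp_mb() in front of the cmpxchg() forbids
it everywhere.
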
Ideally this Power CPU can optimize back-to-back SYNC instructions, but
who knows...
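
Spelled out, the 'fixed' variant is just the hunk above with the barrier
made explicit; a minimal sketch of how that spot in pv_kick_node() would
then read (same identifiers as kernel/locking/qspinlock_paravirt.h):

	/*
	 * Kicker side of the pairing with smp_store_mb() in pv_wait_node():
	 *
	 *  [S] pn->state = vcpu_halted        [S] next->locked = 1
	 *      MB (smp_store_mb)                  MB (explicit smp_mb)
	 *  [L] pn->locked                   [RmW] pn->state, halted -> hashed
	 *
	 * The explicit barrier orders the next->locked store before the
	 * pn->state access even when the cmpxchg() below fails.
	 */
	smp_mb();

	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
		return;

On Power that smp_mb() is a SYNC sitting directly in front of the barrier
a full cmpxchg() already issues ahead of its ll/sc loop, hence the
back-to-back SYNC remark above; whether the hardware can fold them is the
open question.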