Message-ID: <CAOGi=dMud6TANP5AQP06paToJYZvFtteBBUzPHJY5_JXiruHhQ@mail.gmail.com>
Date: Tue, 20 Oct 2015 11:00:18 +0800
From: Ling Ma <ling.ma.program@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, linux-kernel@...r.kernel.org,
Ma Ling <ling.ml@...baba-inc.com>, waiman.long@....com
Subject: Re: [RFC PATCH] qspinlock: Improve performance by reducing load instruction rollback

2015-10-19 17:33 GMT+08:00 Peter Zijlstra <peterz@...radead.org>:
> On Mon, Oct 19, 2015 at 10:27:22AM +0800, ling.ma.program@...il.com wrote:
>> From: Ma Ling <ling.ml@...baba-inc.com>
>>
>> All load instructions can run speculatively, but they have to follow
>> the memory-ordering rule across multiple cores shown below:
>> _x = _y = 0
>>
>> Processor 0 Processor 1
>>
>> mov r1, [ _y] //M1 mov [ _x], 1 //M3
>> mov r2, [ _x] //M2 mov [ _y], 1 //M4
>>
>> If r1 = 1, r2 must be 1
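
To make the rule concrete, here is a minimal C11 litmus-test sketch of
the same scenario (illustrative only, not from the patch; with the
default seq_cst atomics the final assertion is guaranteed to hold):

#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>

static atomic_int _x, _y;
static int r1, r2;

static void *p0(void *arg)              /* Processor 0 */
{
        (void)arg;
        r1 = atomic_load(&_y);          /* M1 */
        r2 = atomic_load(&_x);          /* M2 */
        return NULL;
}

static void *p1(void *arg)              /* Processor 1 */
{
        (void)arg;
        atomic_store(&_x, 1);           /* M3 */
        atomic_store(&_y, 1);           /* M4 */
        return NULL;
}

int main(void)
{
        pthread_t t0, t1;

        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);

        /* the ordering rule: r1 == 1 implies r2 == 1 */
        assert(!(r1 == 1 && r2 == 0));
        return 0;
}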
>>
>> In order to guarantee the above rule, although Processor 0 may
>> execute M1 and M2 out of order, both are kept in the ROB. When
>> the load-buffer entry for _x in Processor 0 receives the update
>> message from Processor 1, Processor 0 has to roll back from M2
>> and flush the whole pipeline; that latency exceeds the penalty
>> of a branch-prediction miss.
>>
>> In this patch we use the lock cmpxchg instruction to force the
>> loads to be serialized: the destination operand receives a write
>> cycle regardless of the result of the comparison, which helps us
>> avoid the penalty of the load-instruction rollback.
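
To illustrate the point: even a failing compare still performs a
locked write cycle on x86, so a cmpxchg-based wait keeps the cache
line held by this core instead of re-reading it speculatively. A
user-space sketch of the idea (hypothetical helper using C11 atomics;
the actual patch uses the kernel's cmpxchg() on lock->locked_pending):

#include <stdatomic.h>

/*
 * Sketch: wait until *val moves from `from` to `to`.
 * atomic_compare_exchange_strong() compiles to `lock cmpxchg`
 * on x86; the destination gets a write cycle whether or not
 * the comparison succeeds, so the loads are serialized and
 * cannot be rolled back.
 */
static void wait_and_take(atomic_int *val, int from, int to)
{
        int expected;

        do {
                /* cmpxchg updates `expected` on failure; reset it */
                expected = from;
        } while (!atomic_compare_exchange_strong(val, &expected, to));
}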
>>
>> Our experiments indicate that performance improves by 10%~15% in
>> the 2- and 3-thread cases, where contention on the lock cache
>> line accounts for most of the time.
>
> On what hardware? Also, you forgot to Cc Waiman, who is a prime author
> of this code. Excessive quoting for his benefit.
>
>> Signed-off-by: Ma Ling <ling.ml@...baba-inc.com>
>> ---
>> kernel/locking/qspinlock.c | 43 ++++++++++++++++++-------------------------
>> 1 files changed, 18 insertions(+), 25 deletions(-)
>>
>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>> index 87e9ce6..16421f2 100644
>> --- a/kernel/locking/qspinlock.c
>> +++ b/kernel/locking/qspinlock.c
>> @@ -332,25 +332,14 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>> if (new == _Q_LOCKED_VAL)
>> return;
>>
>> - /*
>> - * we're pending, wait for the owner to go away.
>> - *
>> - * *,1,1 -> *,1,0
>> + /* we're waiting, and get lock owner
>
> That's incorrect coding style
OK, I will fix it, thanks.
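
For reference, the usual kernel style for that multi-line comment
would be:

	/*
	 * we're waiting, and get the lock owner
	 *
	 * *,1,* -> *,0,1
	 */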
>
>> *
>> - * this wait loop must be a load-acquire such that we match the
>> - * store-release that clears the locked bit and create lock
>> - * sequentiality; this is because not all clear_pending_set_locked()
>> - * implementations imply full barriers.
>> + * *,1,* -> *,0,1
>> */
>> - while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
>> + while (cmpxchg(&((struct __qspinlock *)lock)->locked_pending,
>> + _Q_PENDING_VAL, _Q_LOCKED_VAL) != _Q_PENDING_VAL)
>
> That's both horrible coding style and painful, we should not spin-wait
> with a cmpxchg instruction like that.
OK, I will fix it.
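
One possible fix (an untested sketch, reusing the helpers already in
this file): spin on a plain load first, and only issue the cmpxchg
once the lock word looks ready, so the atomic is not hammered in a
tight loop:

	struct __qspinlock *l = (struct __qspinlock *)lock;

	for (;;) {
		/* spin read-only until the owner goes away */
		while (READ_ONCE(l->locked_pending) != _Q_PENDING_VAL)
			cpu_relax();
		/* single atomic attempt: *,1,0 -> *,0,1 */
		if (cmpxchg(&l->locked_pending,
			    _Q_PENDING_VAL, _Q_LOCKED_VAL) == _Q_PENDING_VAL)
			break;
	}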
>
>> cpu_relax();
>> -
>> - /*
>> - * take ownership and clear the pending bit.
>> - *
>> - * *,1,0 -> *,0,1
>> - */
>> - clear_pending_set_locked(lock);
>> +
>> return;
>>
>> /*
>> @@ -399,17 +388,21 @@ queue:
>> * we're at the head of the waitqueue, wait for the owner & pending to
>> * go away.
>> *
>> - * *,x,y -> *,0,0
>> - *
>> - * this wait loop must use a load-acquire such that we match the
>> - * store-release that clears the locked bit and create lock
>> - * sequentiality; this is because the set_locked() function below
>> - * does not imply a full barrier.
>> - *
>> + * *,x,y -> *,0,1
>> */
>> pv_wait_head(lock, node);
>> - while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
>> + next = READ_ONCE(node->next);
>> + while (cmpxchg(&((struct __qspinlock *)lock)->locked_pending, 0,
>> + _Q_LOCKED_VAL) != 0) {
>
> idem
>
>> + next = READ_ONCE(node->next);
>> cpu_relax();
>> + }
>> +
>> + if (next)
>> + goto next_node;
>> +
>> + val = smp_load_acquire(&lock->val.counter);
>> + tail = tail | _Q_LOCKED_VAL;
>>
>> /*
>> * claim the lock:
>> @@ -423,7 +416,6 @@ queue:
>> */
>> for (;;) {
>> if (val != tail) {
>> - set_locked(lock);
>> break;
>> }
>> old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
>> @@ -439,6 +431,7 @@ queue:
>> while (!(next = READ_ONCE(node->next)))
>> cpu_relax();
>>
>> +next_node:
>> arch_mcs_spin_unlock_contended(&next->locked);
>> pv_kick_node(lock, next);
>>
>> --
>> 1.7.1
>>