Message-ID: <5805218A.3050707@hpe.com>
Date: Mon, 17 Oct 2016 15:07:54 -0400
From: Waiman Long <waiman.long@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Jason Low <jason.low2@....com>,
Ding Tianhong <dingtianhong@...wei.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <Will.Deacon@....com>,
Ingo Molnar <mingo@...hat.com>,
Imre Deak <imre.deak@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Terry Rudd <terry.rudd@....com>,
"Paul E. McKenney" <paulmck@...ibm.com>,
Jason Low <jason.low2@...com>,
Chris Wilson <chris@...is-wilson.co.uk>,
Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: [PATCH -v4 5/8] locking/mutex: Add lock handoff to avoid starvation
On 10/17/2016 02:45 PM, Waiman Long wrote:
> On 10/07/2016 10:52 AM, Peter Zijlstra wrote:
>> /*
>> * Actual trylock that will work on any unlocked state.
>> + *
>> + * When setting the owner field, we must preserve the low flag bits.
>> + *
>> + * Be careful with @handoff, only set that in a wait-loop (where you set
>> + * HANDOFF) to avoid recursive lock attempts.
>> */
>> -static inline bool __mutex_trylock(struct mutex *lock)
>> +static inline bool __mutex_trylock(struct mutex *lock, const bool handoff)
>> {
>> unsigned long owner, curr = (unsigned long)current;
>>
>> owner = atomic_long_read(&lock->owner);
>> for (;;) { /* must loop, can race against a flag */
>> - unsigned long old;
>> + unsigned long old, flags = __owner_flags(owner);
>> +
>> + if (__owner_task(owner)) {
>> + if (handoff && unlikely(__owner_task(owner) == current)) {
>> + /*
>> + * Provide ACQUIRE semantics for the lock-handoff.
>> + *
>> + * We cannot easily use load-acquire here, since
>> + * the actual load is a failed cmpxchg, which
>> + * doesn't imply any barriers.
>> + *
>> + * Also, this is a fairly unlikely scenario, and
>> + * this contains the cost.
>> + */
>
> I am not so sure about your comment here. I guess you are referring to
> the atomic_long_cmpxchg_acquire() below for the failed cmpxchg. However,
> this path can also be taken on the first iteration, before any cmpxchg
> has been attempted. Maybe we can do a load-acquire on the owner again
> to satisfy the ACQUIRE requirement without a full smp_mb().
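
For example, something along these lines (just a rough, untested sketch
on top of your patch; I am assuming atomic_long_read_acquire() can be
used here):

	if (handoff && unlikely(__owner_task(owner) == current)) {
		/*
		 * Re-read the owner with ACQUIRE semantics instead of
		 * issuing a full smp_mb(). The value isn't needed,
		 * only the ordering.
		 */
		(void)atomic_long_read_acquire(&lock->owner);
		return true;
	}

On x86 that acquire read should be essentially free, and on weakly
ordered architectures it should still be cheaper than a full barrier.
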
>
>> + smp_mb(); /* ACQUIRE */
>> + return true;
>> + }
>>
>> - if (__owner_task(owner))
>> return false;
>> + }
>>
>> - old = atomic_long_cmpxchg_acquire(&lock->owner, owner,
>> - curr | __owner_flags(owner));
>> + /*
>> + * We set the HANDOFF bit, we must make sure it doesn't live
>> + * past the point where we acquire it. This would be possible
>> + * if we (accidentally) set the bit on an unlocked mutex.
>> + */
>> + if (handoff)
>> + flags &= ~MUTEX_FLAG_HANDOFF;
>> +
>> + old = atomic_long_cmpxchg_acquire(&lock->owner, owner, curr | flags);
>> if (old == owner)
>> return true;
>>
>>
>
> Other than that, the code is fine.
>
> Reviewed-by: Waiman Long <Waiman.Long@....com>
>
One more thing: I think it may be worthwhile to add another comment
about what happens when the HANDOFF bit is set while we take the error
path (goto err). Since the actual handoff is serialized by the
wait_lock, the code will still do the right thing: either the next
waiter in the queue will be handed the lock, or the lock will be
unlocked if the queue is empty.
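
Roughly, as I read the unlock slow path in this series (approximate
sketch from memory with context trimmed, so take the details with a
grain of salt):

	spin_lock_mutex(&lock->wait_lock, flags);
	if (!list_empty(&lock->wait_list)) {
		/* pick the first waiter still on the queue */
		struct mutex_waiter *waiter =
			list_first_entry(&lock->wait_list,
					 struct mutex_waiter, list);
		next = waiter->task;
		wake_q_add(&wake_q, next);
	}
	if (owner & MUTEX_FLAG_HANDOFF)
		/* next == NULL (empty queue) simply unlocks the mutex */
		__mutex_handoff(lock, next);
	spin_unlock_mutex(&lock->wait_lock, flags);

A waiter that has already removed itself on the error path can never be
handed the lock, because both the removal and the handoff happen under
the wait_lock.
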
Cheers,
Longman