Message-ID: <20161018123648.GV3117@twins.programming.kicks-ass.net>
Date: Tue, 18 Oct 2016 14:36:48 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <waiman.long@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Jason Low <jason.low2@....com>,
Ding Tianhong <dingtianhong@...wei.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <Will.Deacon@....com>,
Ingo Molnar <mingo@...hat.com>,
Imre Deak <imre.deak@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Terry Rudd <terry.rudd@....com>,
"Paul E. McKenney" <paulmck@...ibm.com>,
Jason Low <jason.low2@...com>,
Chris Wilson <chris@...is-wilson.co.uk>,
Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: [PATCH -v4 5/8] locking/mutex: Add lock handoff to avoid
starvation
On Mon, Oct 17, 2016 at 02:45:50PM -0400, Waiman Long wrote:
> On 10/07/2016 10:52 AM, Peter Zijlstra wrote:
> > /*
> > * Actual trylock that will work on any unlocked state.
> >+ *
> >+ * When setting the owner field, we must preserve the low flag bits.
> >+ *
> >+ * Be careful with @handoff, only set that in a wait-loop (where you set
> >+ * HANDOFF) to avoid recursive lock attempts.
> > */
> >-static inline bool __mutex_trylock(struct mutex *lock)
> >+static inline bool __mutex_trylock(struct mutex *lock, const bool handoff)
> > {
> > unsigned long owner, curr = (unsigned long)current;
> >
> > owner = atomic_long_read(&lock->owner);
> > for (;;) { /* must loop, can race against a flag */
> >- unsigned long old;
> >+ unsigned long old, flags = __owner_flags(owner);
> >+
> >+ if (__owner_task(owner)) {
> >+ if (handoff && unlikely(__owner_task(owner) == current)) {
> >+ /*
> >+ * Provide ACQUIRE semantics for the lock-handoff.
> >+ *
> >+ * We cannot easily use load-acquire here, since
> >+ * the actual load is a failed cmpxchg, which
> >+ * doesn't imply any barriers.
> >+ *
> >+ * Also, this is a fairly unlikely scenario, and
> >+ * this contains the cost.
> >+ */
>
> I am not so sure about your comment here. I guess you are referring to the
> atomic_long_cmpxchg_acquire() below for the failed cmpxchg. However, it is
> also possible that this path is taken on the first iteration, before any
> cmpxchg has been attempted. Maybe we can do a load-acquire on the owner
> again to satisfy this requirement without an smp_mb().
Yes, I refer to the atomic_long_cmpxchg_acquire() below. If that cmpxchg
fails, no barriers are implied.
Yes we could fix that, but that would make all cmpxchg_acquire() loops
more expensive. And only fixing the initial load doesn't help, since we
still need to deal with the cmpxchg case failing.
We _could_ re-issue the load I suppose, since if owner == current nobody
else is going to change it, but I feel slightly uneasy with that.
Also, it's a relatively slow path, so for now let's keep it simple and
explicit like so.
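To make the ordering concern concrete: a minimal userspace sketch of the
trylock loop, using C11 atomics in place of the kernel's atomic_long_*()
API. All names here (trylock, FLAG_HANDOFF, owner_task, etc.) are
illustrative, not the kernel's; the point is that the handoff fast-path
observes the owner via a plain (relaxed) load, so an explicit acquire
barrier is needed, exactly as the patch does with smp_mb().

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag bits, mirroring the role of MUTEX_FLAG_HANDOFF. */
#define FLAG_HANDOFF 0x1UL
#define FLAG_MASK    0x3UL

static inline uintptr_t owner_task(uintptr_t v)  { return v & ~FLAG_MASK; }
static inline uintptr_t owner_flags(uintptr_t v) { return v & FLAG_MASK; }

/* Userspace sketch of the trylock discussed above; @curr stands in for
 * the current task pointer (low flag bits must be clear). */
static bool trylock(atomic_uintptr_t *lock, uintptr_t curr, bool handoff)
{
	uintptr_t owner = atomic_load_explicit(lock, memory_order_relaxed);

	for (;;) { /* must loop, can race against a flag */
		uintptr_t flags = owner_flags(owner);

		if (owner_task(owner)) {
			if (handoff && owner_task(owner) == curr) {
				/*
				 * Lock was handed to us. The value of
				 * @owner came from a relaxed load (or, in
				 * the kernel, a failed cmpxchg), neither of
				 * which implies ordering, so supply the
				 * ACQUIRE barrier explicitly.
				 */
				atomic_thread_fence(memory_order_acquire);
				return true;
			}
			return false;
		}

		/* Don't let HANDOFF live past the point of acquisition. */
		if (handoff)
			flags &= ~FLAG_HANDOFF;

		/* On failure, compare_exchange rewrites @owner with the
		 * freshly observed value, so the loop retries correctly. */
		if (atomic_compare_exchange_weak_explicit(lock, &owner,
							  curr | flags,
							  memory_order_acquire,
							  memory_order_relaxed))
			return true;
	}
}
```

Note the asymmetry: the successful cmpxchg path gets its ordering from
memory_order_acquire on the exchange itself, while the handoff path has to
pay for a standalone fence, which is why keeping it on the unlikely branch
contains the cost.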
>
> >+ smp_mb(); /* ACQUIRE */
> >+ return true;
> >+ }
> >
> >- if (__owner_task(owner))
> > return false;
> >+ }
> >
> >- old = atomic_long_cmpxchg_acquire(&lock->owner, owner,
> >- curr | __owner_flags(owner));
> >+ /*
> >+ * We set the HANDOFF bit, we must make sure it doesn't live
> >+ * past the point where we acquire it. This would be possible
> >+ * if we (accidentally) set the bit on an unlocked mutex.
> >+ */
> >+ if (handoff)
> >+ flags &= ~MUTEX_FLAG_HANDOFF;
> >+
> >+ old = atomic_long_cmpxchg_acquire(&lock->owner, owner, curr | flags);
> > if (old == owner)
> > return true;
> >
> >
>
> Other than that, the code is fine.
>
> Reviewed-by: Waiman Long <Waiman.Long@....com>
Thanks!