Message-ID: <20160818142735.GB10121@twins.programming.kicks-ass.net>
Date: Thu, 18 Aug 2016 16:27:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jason Low <jason.low2@....com>
Cc: Ingo Molnar <mingo@...hat.com>, imre.deak@...el.com,
linux-kernel@...r.kernel.org, Jason Low <jason.low2@...com>,
Waiman Long <Waiman.Long@....com>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>, terry.rudd@....com,
"Paul E. McKenney" <paulmck@...ibm.com>
Subject: Re: [PATCH v2] locking/mutex: Prevent lock starvation when spinning is enabled
On Wed, Aug 10, 2016 at 11:44:08AM -0700, Jason Low wrote:
> @@ -556,8 +604,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> * other waiters. We only attempt the xchg if the count is
> * non-negative in order to avoid unnecessary xchg operations:
> */
> - if (atomic_read(&lock->count) >= 0 &&
> + if ((!need_yield_to_waiter(lock) || wakeups > 1) &&
> + atomic_read(&lock->count) >= 0 &&
> (atomic_xchg_acquire(&lock->count, -1) == 1))
> + if (wakeups > 1)
> + clear_yield_to_waiter(lock);
> +
> break;
>
> /*
There's some { } gone missing there...
Also, I think I'll change it to avoid that extra wakeups > 1 condition..