Message-ID: <1389813781.2944.77.camel@j-VirtualBox>
Date: Wed, 15 Jan 2014 11:23:01 -0800
From: Jason Low <jason.low2@...com>
To: Waiman Long <waiman.long@...com>
Cc: mingo@...hat.com, peterz@...radead.org, paulmck@...ux.vnet.ibm.com,
torvalds@...ux-foundation.org, tglx@...utronix.de,
linux-kernel@...r.kernel.org, riel@...hat.com,
akpm@...ux-foundation.org, davidlohr@...com, hpa@...or.com,
aswin@...com, scott.norton@...com
Subject: Re: [RFC 2/3] mutex: Modify the way optimistic spinners are queued
On Wed, 2014-01-15 at 10:10 -0500, Waiman Long wrote:
> On 01/14/2014 07:33 PM, Jason Low wrote:
> > @@ -503,8 +504,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> >  		 * When there's no owner, we might have preempted between the
> >  		 * owner acquiring the lock and setting the owner field. If
> >  		 * we're an RT task that will live-lock because we won't let
> >  		 * the owner complete.
> >  		 */
> > -		if (!owner && (need_resched() || rt_task(task)))
> > +		if (!owner && (need_resched() || rt_task(task))) {
> > +			mspin_unlock(MLOCK(lock), &node);
> >  			goto slowpath;
> > +		}
> >
> >  		/*
> >  		 * The cpu_relax() call is a compiler barrier which forces
>
> Maybe you can consider restructuring the code as follows to reduce the
> number of mspin_unlock() call sites:
Yeah, I would prefer your method of using break and having the
mspin_unlock() at the end of the loop, since it results in fewer
mspin_unlock() call sites.
Commit ec83f425dbca47e19c6737e8e7db0d0924a5de1b changed the break to a
goto slowpath to make the code more intuitive to read, but with this
patch there are benefits to going back to the break.
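
For reference, the restructured loop would look roughly like the sketch
below (my sketch of the idea, not Waiman's verbatim code; it is based on
the __mutex_lock_common() spin loop in this series, with the ww_mutex and
lockdep bookkeeping omitted for brevity). The lock-acquired path still
unlocks and returns from inside the loop, so this goes from three
mspin_unlock() call sites down to two:

	mspin_lock(MLOCK(lock), &node);
	for (;;) {
		struct task_struct *owner;

		/* If there's an owner, spin until it releases the lock. */
		owner = ACCESS_ONCE(lock->owner);
		if (owner && !mutex_spin_on_owner(lock, owner))
			break;

		/* Try to acquire the mutex if it became unlocked. */
		if ((atomic_read(&lock->count) == 1) &&
		    (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
			mutex_set_owner(lock);
			mspin_unlock(MLOCK(lock), &node);
			preempt_enable();
			return 0;
		}

		/*
		 * No owner and we either need to reschedule or are an
		 * RT task: stop spinning and take the slowpath.
		 */
		if (!owner && (need_resched() || rt_task(task)))
			break;

		arch_mutex_cpu_relax();
	}
	mspin_unlock(MLOCK(lock), &node);
slowpath: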