Message-ID: <1425922630.2475.390.camel@j-VirtualBox>
Date: Mon, 09 Mar 2015 10:37:10 -0700
From: Jason Low <jason.low2@...com>
To: Sasha Levin <sasha.levin@...cle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Michel Lespinasse <walken@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
Dave Jones <davej@...emonkey.org.uk>,
Ming Lei <ming.lei@...onical.com>, jason.low2@...com
Subject: Re: [PATCH] locking/rwsem: Fix lock optimistic spinning when owner
is not running
On Sat, 2015-03-07 at 13:17 -0500, Sasha Levin wrote:
> On 03/07/2015 02:45 AM, Jason Low wrote:
> > Fixes tip commit b3fd4f03ca0b (locking/rwsem: Avoid deceiving lock spinners).
> >
> > Ming reported soft lockups occurring when running xfstest due to
> > commit b3fd4f03ca0b.
> >
> > When doing optimistic spinning in rwsem, threads should stop spinning when
> > the lock owner is not running. While a thread is spinning on the owner, if
> > the owner reschedules, owner->on_cpu becomes false and we stop spinning.
> >
> > However, commit b3fd4f03ca0b essentially caused that check to be ignored:
> > when we break out of the spin loop due to !owner->on_cpu, we continue
> > spinning anyway as long as sem->owner != NULL.
> >
> > This patch fixes the issue by making sure we stop spinning whenever the
> > owner is not running. Furthermore, just like with mutexes, it refactors
> > the code so that there is no separate owner_running() check. This makes
> > the reasons for exiting the spin-on-owner loop explicit, so we no longer
> > need to "guess" why we broke out of the loop, which makes the code more
> > readable.
>
> That seems to solve the hangs I'm seeing as well.
Great, thanks for confirming this.