Message-ID: <1393994738.2512.48.camel@buesod1.americas.hpqcorp.net>
Date: Tue, 04 Mar 2014 20:45:38 -0800
From: Davidlohr Bueso <davidlohr@...com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Darren Hart <dvhart@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: futex funkiness -- massive lockups
On Tue, 2014-03-04 at 19:36 -0800, Linus Torvalds wrote:
> On Tue, Mar 4, 2014 at 5:43 PM, Davidlohr Bueso <davidlohr@...com> wrote:
> >
> >
> > From the paths related to futex wait we are stuck when taking the hb
> > spinlock in futex_wait_setup >> queue_lock.
>
> Just judging from your trace, I would have suspected a (possibly soft)
> lockup in load_balance() rather than the futexes.
>
> The futex being stuck seems expected, since one cpu is definitely
> holding the lock - it was interrupted by a timer interrupt at the
> successful return case of raw_spin_lock if I read the offset right.
>
> So if that softirq is stuck - perhaps because it's in some endless
> loop in load_balance(), or perhaps because it's spending so much time
> load-balancing that the next balancing time happens immediately, or
> whatever - then you'd see that trace.
That does make a lot of sense. Since this is a futex-intensive
workload, it would also explain why I'm seeing so many CPUs stuck
waiting for the lock in the futex wait paths: they all hash to the same
bucket, and each of them is stuck doing the cmpxchg over and over again.
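Roughly the situation, as a stripped-down userspace sketch (not the
actual kernel code -- the bucket count, the hash and the lock
implementation below are all made up for illustration): every waiter
hashes its futex address into a shared bucket table and then spins on
that bucket's lock with cmpxchg, so waiters colliding on one bucket
all hammer the same word:

#include <stdint.h>
#include <stdio.h>

#define NUM_BUCKETS 256	/* made-up size, only for illustration */

struct hash_bucket {
	int lock;		/* 0 == unlocked, 1 == locked */
};

static struct hash_bucket buckets[NUM_BUCKETS];

/* toy hash: the real hashing mixes in more of the futex key */
static struct hash_bucket *bucket_for(uint32_t *uaddr)
{
	return &buckets[((uintptr_t)uaddr >> 2) % NUM_BUCKETS];
}

/*
 * Test-and-set spinlock: every contending CPU keeps retrying the
 * cmpxchg until the holder releases the lock -- this is the loop
 * the stuck CPUs in the trace are sitting in.
 */
static void bucket_lock(struct hash_bucket *hb)
{
	while (__sync_val_compare_and_swap(&hb->lock, 0, 1) != 0)
		; /* spin */
}

static void bucket_unlock(struct hash_bucket *hb)
{
	__sync_lock_release(&hb->lock);
}

/*
 * Rough shape of the wait-side setup: serialize against everything
 * else that hashed to the same bucket before looking at *uaddr.
 */
static void toy_futex_wait_setup(uint32_t *uaddr)
{
	struct hash_bucket *hb = bucket_for(uaddr);

	bucket_lock(hb);
	/* ... re-read *uaddr, queue the waiter, etc ... */
	bucket_unlock(hb);
}

int main(void)
{
	uint32_t futex_word = 0;

	toy_futex_wait_setup(&futex_word);
	printf("used bucket %td\n", bucket_for(&futex_word) - buckets);
	return 0;
}

If the holder never lets go (e.g. because the softirq keeps its CPU
busy), everybody else just burns cycles in that cmpxchg loop, which
matches what the traces show.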
Unfortunately the machine code dump is missing for the load-balancing
bits, so it's pretty hard to see right away where the trapping
instruction occurs.
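For completeness, the second scenario you describe (the balancing pass
taking so long that the next deadline is already due) boils down to
something like this toy model -- the interval and cost numbers are
invented, this is not the scheduler code:

#include <stdio.h>

/* invented numbers: rebalance every 10 ticks, but each pass costs 15 */
#define BALANCE_INTERVAL	10
#define BALANCE_COST		15

int main(void)
{
	unsigned long now = 0, next_balance = 0;

	for (int pass = 1; pass <= 5; pass++) {
		/* wait, if needed, until the deadline */
		if (now < next_balance)
			now = next_balance;
		now += BALANCE_COST;		  /* the pass itself eats time... */
		next_balance += BALANCE_INTERVAL; /* ...so the new deadline is stale */
		printf("pass %d: now=%lu next_balance=%lu%s\n",
		       pass, now, next_balance,
		       now >= next_balance ? " (due again immediately)" : "");
	}
	return 0;
}

Once the cost exceeds the interval, the deadline never gets ahead of
"now" again and the balancing effectively runs back to back, which
would keep the interrupted lock holder from ever getting back to drop
the hb lock.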
Thanks!