Message-ID: <20140408161023.GP10526@twins.programming.kicks-ass.net>
Date: Tue, 8 Apr 2014 18:10:23 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Sasha Levin <sasha.levin@...cle.com>
Cc: Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Dave Jones <davej@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: sched: long running interrupts breaking spinlocks

On Tue, Apr 08, 2014 at 11:26:56AM -0400, Sasha Levin wrote:
> Hi all,
>
> (all the below happened inside mm/ code, so while I don't suspect
> it's a mm/ issue you folks got cc'ed anyways!)
>
> While fuzzing with trinity inside a KVM tools guest running the latest -next
> kernel, I've stumbled on the following:
>
> [ 4071.166362] BUG: spinlock lockup suspected on CPU#19, trinity-c19/17092
That's a heuristic in the spinlock debug code; triggering it on big machines
(19 CPUs is far bigger than anything around when that code was written)
and under virt (yay for lock-owner preemption; another thing we didn't
have back then) is trivial.

I'd not worry too much about this.
So DEBUG_SPINLOCKS turns spin_lock() into something like:

	for (i = 0; i < loops; i++)
		if (spin_trylock())
			return;
	/* complain */

And you simply ran out of loops.
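
For reference, here is the same bounded-trylock heuristic in a
self-contained userspace form. This is only a sketch: debug_spin_lock()
and LOOP_BUDGET are made-up names for illustration, and the real kernel
check lives in lib/spinlock_debug.c, where the loop budget is derived
from loops_per_jiffy rather than a fixed constant.

	/*
	 * Sketch of the DEBUG_SPINLOCK-style check: spin on trylock for a
	 * bounded number of iterations and complain ("lockup suspected")
	 * once the budget is exhausted, then fall back to a blocking
	 * acquire. Build with: cc -o spindbg spindbg.c -lpthread
	 */
	#include <stdio.h>
	#include <pthread.h>

	#define LOOP_BUDGET	(1UL << 24)	/* arbitrary bound for the sketch */

	static void debug_spin_lock(pthread_spinlock_t *lock)
	{
		unsigned long i;

		for (i = 0; i < LOOP_BUDGET; i++)
			if (pthread_spin_trylock(lock) == 0)
				return;		/* got the lock in time */

		/* Budget exhausted: warn, then take the lock the slow way. */
		fprintf(stderr, "BUG: spinlock lockup suspected\n");
		pthread_spin_lock(lock);
	}

	int main(void)
	{
		pthread_spinlock_t lock;

		pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
		debug_spin_lock(&lock);		/* uncontended: returns immediately */
		pthread_spin_unlock(&lock);
		pthread_spin_destroy(&lock);
		return 0;
	}

If the lock holder is preempted (as a vCPU easily can be), the waiter
burns through the whole budget and the warning fires even though nothing
is actually deadlocked, which is the situation described above.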