Message-ID: <20141215140000.GB6590@pd.tnic>
Date: Mon, 15 Dec 2014 15:00:00 +0100
From: Borislav Petkov <bp@...en8.de>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Dave Jones <davej@...hat.com>, Chris Mason <clm@...com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Dâniel Fraga <fragabr@...il.com>,
Sasha Levin <sasha.levin@...cle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Suresh Siddha <sbsiddha@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
Peter Anvin <hpa@...ux.intel.com>
Subject: Re: frequent lockups in 3.18rc4
On Sun, Dec 14, 2014 at 09:47:26PM -0800, Linus Torvalds wrote:
> and "save_xstate_sig+0x81" shows up on all stacks, although only on
> CPU1 does it show up as a "guaranteed" part of the stack chain (ie it
> matches frame pointer data too). CPU1 also has that __clear_user show
> up (which is called from save_xstate_sig), but not other CPU's. CPU2
> and CPU3 have "save_xstate_sig+0x98" in addition to that +0x81 thing.
>
> My guess is that "save_xstate_sig+0x81" is the instruction after the
> __clear_user call, and that CPU1 took the fault in __clear_user(),
> while CPU2 and CPU3 took the fault at "save_xstate_sig+0x98" instead,
> which I'd guess is the
>
> xsave64 (%rdi)
Err, maybe a wild guess, but could XSAVE be encountering some problems,
like store ordering violations or somesuch?
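(For reference, the xsave64 Linus points at is the store of the extended
register state straight into the user-space signal frame. A minimal
user-space sketch of what that single instruction does - buffer size and
names are mine, not the kernel's, and it obviously needs a CPU/OS with
XSAVE enabled or it'll #UD - looks roughly like this:)

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	void *buf;

	/* The XSAVE area must be 64-byte aligned; 4096 bytes is plenty
	 * for the legacy region, the xsave header and the usual
	 * extended states. */
	if (posix_memalign(&buf, 64, 4096))
		return 1;
	memset(buf, 0, 4096);

	/* EDX:EAX is the requested-feature bitmap; all-ones means
	 * "whatever the OS has enabled in XCR0". */
	asm volatile("xsave64 (%0)"
		     : : "r" (buf), "a" (-1), "d" (-1)
		     : "memory");

	/* XSTATE_BV at offset 512 in the header says which states
	 * actually got written. */
	printf("XSTATE_BV: %#llx\n",
	       (unsigned long long)*(uint64_t *)((char *)buf + 512));

	free(buf);
	return 0;
}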
A quick search turns up
"AZ72. Store Ordering Violation When Using XSAVE"
here: http://download.intel.com/design/mobile/specupdt/320121.pdf which
talks about SSE context stores happening out of order. Now, there are a
lot of IFs here: does Dave's machine even have the erratum, and even if
it does, would it cause some sort of a livelock leading to the kernel
lockups, and so on and so on...
It might be worth ruling out, though.
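(Checking is cheap: the spec update lists the affected CPU signatures,
so something as dumb as the snippet below - or just looking at
/proc/cpuinfo - tells you what Dave's box actually is. The decoding is
the standard CPUID leaf 1 family/model/stepping split, nothing
kernel-specific; compare the output against the signature table in the
PDF.)

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	unsigned int family, model, stepping;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return 1;

	stepping = eax & 0xf;
	family   = (eax >> 8) & 0xf;
	model    = (eax >> 4) & 0xf;

	/* Extended family applies only to family 0xf; extended model
	 * only to families 0x6 and 0xf. */
	if (family == 0xf)
		family += (eax >> 20) & 0xff;
	if (family == 0x6 || family == 0xf)
		model |= ((eax >> 16) & 0xf) << 4;

	printf("family 0x%x, model 0x%x, stepping 0x%x\n",
	       family, model, stepping);
	return 0;
}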
--
Regards/Gruss,
Boris.
Sent from a fat crate under my desk. Formatting is fine.