Message-ID: <c8e023cb-6f50-36f5-65d4-c5e25b264029@intel.com>
Date: Mon, 19 Jul 2021 12:20:24 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>, kernel-team@...com,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org
Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time
On 7/19/21 12:00 PM, Rik van Riel wrote:
> In a large enough fleet of computers, it is common to have a few bad
> CPUs. Those can often be identified by seeing that some commonly run
> kernel code (that runs fine everywhere else) keeps crashing on the
> same CPU core on a particular bad system.
I've encountered a few of these kinds of things over the years. This is
*definitely* useful. What you've proposed here is surely the simplest
thing we could print and probably also offers the best bang for our buck.
The only other thing I thought of is that it might be nice to print out
the core id instead of the CPU id. If there are hardware issues with a
CPU, they're likely to affect both threads. Seeing two different "CPUs"
in an SMT environment might tempt some folks to think it's not a
core-level hardware issue.
If it's as trivial as:
	printk(KERN_CONT " on cpu/core %d/%d",
	       raw_smp_processor_id(),
	       topology_core_id(raw_smp_processor_id()));
it would be handy. But, it's also not hard to look at 10 segfaults, see
that they happened only on 2 CPUs and realize that hyperthreading is
enabled.
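(Purely as an illustration, not something for the patch itself: here's a
minimal userspace sketch, assuming only the standard sysfs topology
files, that maps two logical CPU ids seen in segfault logs to their core
ids. On a single-socket box, matching core ids means SMT siblings; a
multi-socket check would also want to compare physical_package_id.)

	#include <stdio.h>
	#include <stdlib.h>

	/* Read the core id of a logical CPU from sysfs; -1 on error. */
	static int core_id(int cpu)
	{
		char path[64];
		FILE *f;
		int id = -1;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/core_id",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			return -1;
		if (fscanf(f, "%d", &id) != 1)
			id = -1;
		fclose(f);
		return id;
	}

	int main(int argc, char **argv)
	{
		int a, b;

		if (argc != 3) {
			fprintf(stderr, "usage: %s <cpu> <cpu>\n", argv[0]);
			return 1;
		}
		a = atoi(argv[1]);
		b = atoi(argv[2]);
		printf("cpu %d -> core %d, cpu %d -> core %d\n",
		       a, core_id(a), b, core_id(b));
		return 0;
	}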
Either way, this patch moves things in the right direction, so:
Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>