Message-ID: <20210106232347.GG2743@paulmck-ThinkPad-P72>
Date:   Wed, 6 Jan 2021 15:23:47 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     "Luck, Tony" <tony.luck@...el.com>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        "linux-edac@...r.kernel.org" <linux-edac@...r.kernel.org>,
        "bp@...en8.de" <bp@...en8.de>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "hpa@...or.com" <hpa@...or.com>,
        "kernel-team@...com" <kernel-team@...com>
Subject: Re: [PATCH RFC x86/mce] Make mce_timed_out() identify holdout CPUs

On Wed, Jan 06, 2021 at 02:49:18PM -0800, Luck, Tony wrote:
> On Wed, Jan 06, 2021 at 11:17:08AM -0800, Paul E. McKenney wrote:
> > On Wed, Jan 06, 2021 at 06:39:30PM +0000, Luck, Tony wrote:
> > > > The "Timeout: Not all CPUs entered broadcast exception handler" message
> > > > will appear from time to time given enough systems, but this message does
> > > > not identify which CPUs failed to enter the broadcast exception handler.
> > > > This information would be valuable if available, for example, in order to
> > > > correlate with other hardware-oriented error messages.  This commit
> > > > therefore maintains a cpumask_t of CPUs that have entered this handler,
> > > > and prints out which ones failed to enter in the event of a timeout.
> > > 
> > > I tried doing this a while back, but found that in my test case where I forced
> > > an error that would cause both threads from one core to be "missing", the
> > > output was highly unpredictable. Some random number of extra CPUs were
> > > reported as missing. After I added some extra breadcrumbs it became clear
> > > that pretty much all the CPUs (except the missing pair) entered do_machine_check(),
> > > but some got hung up at various points beyond the entry point. My only theory
> > > was that they were trying to snoop caches from the dead core (or access some
> > > other resource held by the dead core) and so they hung too.
> > > 
> > > Your code is much neater than mine ... and perhaps works in other cases, but
> > > maybe the message needs to allow for the fact that some of the cores that
> > > are reported missing may just be collateral damage from the initial problem.
> > 
> > Understood.  The system is probably not in the best shape if this code
> > is ever executed, after all.  ;-)
> > 
> > So how about like this?
> > 
> > 	pr_info("%s: MCE holdout CPUs (may include false positives): %*pbl\n",
> 
> That looks fine.
> > 
> > Easy enough if so!
> > 
> > > If I get time in the next day or two, I'll run my old test against your code to
> > > see what happens.
> 
> I got time today (plenty of meetings in which to run experiments in the background).

Thank you very much!

> This code:
> 
> -               if (mca_cfg.tolerant <= 1)
> +               if (mca_cfg.tolerant <= 1) {
> +                       if (!cpumask_andnot(&mce_missing_cpus, cpu_online_mask, &mce_present_cpus))
> +                               pr_info("%s: MCE holdout CPUs: %*pbl\n",
> +                                       __func__, cpumask_pr_args(&mce_missing_cpus));
>                         mce_panic(msg, NULL, NULL);
> 
> didn't trigger ... so maybe that cpumask_andnot() didn't return the value you expected?
> 
> I added a:
> 
> +                       pr_info("%s: MCE present CPUs: %*pbl\n", __func__, cpumask_pr_args(&mce_present_cpus));
> 
> to check that the mask was being set correctly, and saw:
> 
> [  219.329767] mce: mce_timed_out: MCE present CPUs: 0-23,48-119,144-191
> 
> So every core of socket 1 failed to show up for this test.

I'll say that cpumask_andnot() didn't return the value I expected!
Mostly because idiot here somehow interpreted "If *@dstp is empty,
returns 0, else returns 1" as "Returns true if *dstp is empty".  So the
check is backwards.
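
In other words, cpumask_andnot() computes dst = src1 & ~src2 and its
return value is non-zero when dst ends up non-empty, so the holdout
message needs to fire on a non-zero return.  A minimal sketch of the
corrected logic (same names as in the patch below):

	/* mce_missing_cpus = online & ~present; a non-zero return means
	 * some online CPUs never entered the handler, so report them. */
	if (cpumask_andnot(&mce_missing_cpus, cpu_online_mask, &mce_present_cpus))
		pr_info("MCE holdout CPUs: %*pbl\n",
			cpumask_pr_args(&mce_missing_cpus));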

Please see below for an updated patch.

> > For my own testing, is this still the right thing to use?
> > 
> > 	https://github.com/andikleen/mce-inject
> 
> That fakes up errors (by hooking into the mce_rdmsr() code to return arbitrary
> user supplied values).  The plus side of this is that you can fake any error
> signature without needing special h/w or f/w. The downside is that it is all fake
> and you can't create situations where some CPUs don't show up in the machine
> check handler.
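
Got it, thank you.  So the hook lives entirely on the software side of
the MSR read, conceptually something like the sketch below (hypothetical
helper names, not the actual mce-inject code):

	/* Conceptual sketch only: if a fake error record has been armed for
	 * this CPU/MSR, hand back the user-supplied value instead of reading
	 * the hardware register.  injectm_armed() and injectm_value() are
	 * made-up names standing in for the real injection bookkeeping. */
	static u64 mce_rdmsr(u32 msr)
	{
		if (injectm_armed(msr))
			return injectm_value(msr);

		return native_read_msr(msr);	/* real hardware read */
	}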

So I would need to modify the code to test the code.  I have done worse
things, I suppose.  ;-)

							Thanx, Paul

------------------------------------------------------------------------

x86/mce: Make mce_timed_out() identify holdout CPUs

The "Timeout: Not all CPUs entered broadcast exception handler" message
will appear from time to time given enough systems, but this message does
not identify which CPUs failed to enter the broadcast exception handler.
This information would be valuable if available, for example, in order to
correlate with other hardware-oriented error messages.  This commit
therefore maintains a cpumask_t of CPUs that have entered this handler,
and prints out which ones failed to enter in the event of a timeout.

Cc: Tony Luck <tony.luck@...el.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: <x86@...nel.org>
Cc: <linux-edac@...r.kernel.org>
[ paulmck: Fix cpumask_andnot() check + message per Tony Luck feedback. ]
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 13d3f1c..7a6e1f3 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -878,6 +878,12 @@ static atomic_t mce_executing;
 static atomic_t mce_callin;
 
 /*
+ * Track which CPUs entered the handler in order to print holdouts.
+ */
+static cpumask_t mce_present_cpus;
+static cpumask_t mce_missing_cpus;
+
+/*
  * Check if a timeout waiting for other CPUs happened.
  */
 static int mce_timed_out(u64 *t, const char *msg)
@@ -894,8 +900,12 @@ static int mce_timed_out(u64 *t, const char *msg)
 	if (!mca_cfg.monarch_timeout)
 		goto out;
 	if ((s64)*t < SPINUNIT) {
-		if (mca_cfg.tolerant <= 1)
+		if (mca_cfg.tolerant <= 1) {
+			if (cpumask_andnot(&mce_missing_cpus, cpu_online_mask, &mce_present_cpus))
+				pr_info("%s: MCE holdout CPUs (may include false positives): %*pbl\n",
+					__func__, cpumask_pr_args(&mce_missing_cpus));
 			mce_panic(msg, NULL, NULL);
+		}
 		cpu_missing = 1;
 		return 1;
 	}
@@ -1006,6 +1016,7 @@ static int mce_start(int *no_way_out)
 	 * is updated before mce_callin.
 	 */
 	order = atomic_inc_return(&mce_callin);
+	cpumask_set_cpu(smp_processor_id(), &mce_present_cpus);
 
 	/*
 	 * Wait for everyone.
@@ -1114,6 +1125,7 @@ static int mce_end(int order)
 reset:
 	atomic_set(&global_nwo, 0);
 	atomic_set(&mce_callin, 0);
+	cpumask_clear(&mce_present_cpus);
 	barrier();
 
 	/*
@@ -2712,6 +2724,7 @@ static void mce_reset(void)
 	atomic_set(&mce_executing, 0);
 	atomic_set(&mce_callin, 0);
 	atomic_set(&global_nwo, 0);
+	cpumask_clear(&mce_present_cpus);
 }
 
 static int fake_panic_get(void *data, u64 *val)
