Date:   Wed, 6 Jan 2021 18:39:30 +0000
From:   "Luck, Tony" <tony.luck@...el.com>
To:     "paulmck@...nel.org" <paulmck@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        "linux-edac@...r.kernel.org" <linux-edac@...r.kernel.org>
CC:     "bp@...en8.de" <bp@...en8.de>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "hpa@...or.com" <hpa@...or.com>,
        "kernel-team@...com" <kernel-team@...com>
Subject: RE: [PATCH RFC x86/mce] Make mce_timed_out() identify holdout CPUs

> The "Timeout: Not all CPUs entered broadcast exception handler" message
> will appear from time to time given enough systems, but this message does
> not identify which CPUs failed to enter the broadcast exception handler.
> This information would be valuable if available, for example, in order to
> correlate with other hardware-oriented error messages.  This commit
> therefore maintains a cpumask_t of CPUs that have entered this handler,
> and prints out which ones failed to enter in the event of a timeout.
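
[For context, a minimal sketch of the mechanism the quoted commit message
describes, not the actual patch; the names mce_present_cpus,
mce_note_cpu_entered() and mce_report_holdouts() are placeholders for
illustration only:]

	#include <linux/cpumask.h>
	#include <linux/printk.h>
	#include <linux/smp.h>

	/* One bit per CPU that has reached the broadcast #MC handler. */
	static cpumask_t mce_present_cpus;

	/* Called near the top of do_machine_check() on each CPU. */
	static void mce_note_cpu_entered(void)
	{
		cpumask_set_cpu(smp_processor_id(), &mce_present_cpus);
	}

	/* Called from the mce_timed_out() path to name the holdout CPUs. */
	static void mce_report_holdouts(void)
	{
		static cpumask_t missing;

		cpumask_andnot(&missing, cpu_online_mask, &mce_present_cpus);
		pr_info("MCE: %u CPUs not in MCE handler: %*pbl\n",
			cpumask_weight(&missing), cpumask_pr_args(&missing));
	}

[The mask would be cleared again once all CPUs have left the handler, so
that a later broadcast #MC starts from a clean slate.]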

I tried doing this a while back, but found that in my test case where I forced
an error that would cause both threads from one core to be "missing", the
output was highly unpredictable. Some random number of extra CPUs were
reported as missing. After I added some extra breadcrumbs it became clear
that pretty much all the CPUs (except the missing pair) entered do_machine_check(),
but some got hung up at various points beyond the entry point. My only theory
was that they were trying to snoop caches from the dead core (or access some
other resource held by the dead core) and so they hung too.

Your code is much neater than mine ... and perhaps works in other cases, but
maybe the message needs to allow for the fact that some of the cores that
are reported missing may just be collateral damage from the initial problem.

If I get time in the next day or two, I'll run my old test against your code to
see what happens.

-Tony
