Date:   Wed, 26 Apr 2023 01:00:31 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Dave Hansen <dave.hansen@...el.com>,
        Tony Battersby <tonyb@...ernetics.com>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org
Cc:     "H. Peter Anvin" <hpa@...or.com>,
        Mario Limonciello <mario.limonciello@....com>,
        Tom Lendacky <thomas.lendacky@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On Tue, Apr 25 2023 at 15:29, Dave Hansen wrote:
> On 4/25/23 14:05, Thomas Gleixner wrote:
>> The only consequence of looking at bit 0 of some random other leaf is
>> that all CPUs which run stop_this_cpu() issue WBINVD in parallel, which
>> is slow but should not be a fatal issue.
>> 
>> Tony observed that this hangs about 50% of the time, which means this
>> is a timing issue.
>
> I _think_ the system in question is a dual-socket Westmere.  I don't see
> any obvious errata that we could pin this on:
>
>> https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-5600-specification-update.pdf
>
> Andi Kleen had an interesting theory.  WBINVD is a pretty expensive
> operation.  It's possible that it has some degenerate behavior when
> it's called on a *bunch* of CPUs all at once (which this path can do).
> If the instruction takes too long, it could trip one of the CPU's
> internal lockup detectors and trigger a machine check.  At that point,
> all hell breaks loose.
>
> I don't know the cache coherency protocol well enough to say for sure,
> but I wonder if there's a storm of cache coherency traffic as all those
> lines get written back.  One of the CPUs gets starved, fails to make
> forward progress, and trips a CPU-internal watchdog.
>
> Andi also says that it _should_ log something in the machine check banks
> when this happens, so there should be at least some kind of breadcrumb.
>
> Either way, I'm hoping this hand-waving satiates tglx's morbid curiosity
> about hardware that came out before I even worked at Intel. ;)

No, it does not. :)

There is no reason to believe that this is a problem only of CPUs which
were released a long time ago.

If there is an issue with concurrent WBINVD then this needs to be
addressed independently of Tony's observations.
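
For reference, the pattern in question, as a heavily simplified sketch
of stop_this_cpu() (illustrative only, not the verbatim mainline code):

void __noreturn stop_this_cpu(void *dummy)
{
	local_irq_disable();

	/* Mark this CPU offline _before_ the cache flush below. */
	set_cpu_online(smp_processor_id(), false);

	disable_local_APIC();

	/*
	 * Intended as an SME check (CPUID leaf 0x8000001f, bit 0). On
	 * CPUs which do not implement that leaf, cpuid_eax() hands back
	 * data from some other leaf, so bit 0 can be set spuriously and
	 * every CPU ends up issuing WBINVD in parallel.
	 */
	if (cpuid_eax(0x8000001f) & BIT(0))
		native_wbinvd();

	for (;;)
		native_halt();
}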

Aside from that, the fact that the control CPU is allowed to make
progress as soon as a stopped CPU clears its online bit early is still
a possible explanation for the wreckage purely on timing grounds.
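
The sequencing on the control CPU side, heavily simplified from
native_stop_other_cpus():

	/* Kick the others into stop_this_cpu() ... */
	apic_send_IPI_allbutself(REBOOT_VECTOR);

	/*
	 * ... and wait for them to report themselves offline. Each
	 * stopped CPU clears its online bit _before_ it issues WBINVD,
	 * so this loop can terminate while the WBINVDs are still in
	 * flight and the control CPU happily proceeds to power the
	 * machine off underneath them.
	 */
	while (num_online_cpus() > 1 && timeout--)
		udelay(1);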

The reason why I insist on a proper analysis is definitely not morbid
curiosity. The real reason is that I fundamentally hate problems being
handwaved away.

It's a matter of fact that all problems which are not root-caused keep
coming back, and not necessarily in debuggable ways. Tony's 50% case is
golden compared to the once-in-a-blue-moon issues.

I outlined the debug options already. So just throw them at the problem
instead of indulging in handwaving theories.

Thanks,

        tglx
