Message-ID: <ZICuhZHCqSYvR4IO@araj-dh-work>
Date:   Wed, 7 Jun 2023 09:21:25 -0700
From:   Ashok Raj <ashok_raj@...ux.intel.com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
        Ashok Raj <ashok.raj@...ux.intel.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Tony Luck <tony.luck@...el.com>,
        Arjan van de Veen <arjan@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Eric Biederman <ebiederm@...ssion.com>,
        Ashok Raj <ashok.raj@...el.com>
Subject: Re: [patch 0/6] Cure kexec() vs. mwait_play_dead() troubles

On Tue, Jun 06, 2023 at 12:41:43AM +0200, Thomas Gleixner wrote:
> On Mon, Jun 05 2023 at 10:41, Sean Christopherson wrote:
> > On Sat, Jun 03, 2023, Thomas Gleixner wrote:
> >> This is only half safe because HLT can resume execution due to NMI, SMI and
> >> MCE. Unfortunately there is no real safe mechanism to "park" a CPU reliably,
> >
> > On Intel.  On AMD, enabling EFER.SVME and doing CLGI will block everything except
> > single-step #DB (lol) and RESET.  #MC handling is implementation-dependent and
> > *might* cause shutdown, but at least there's a chance it will work.  And presumably
> > modern CPUs do pend the #MC until GIF=1.
> 
> Abusing SVME for that is definitely in the realm of creative bonus
> points, but not necessarily a general purpose solution.
> 
> >> So parking them via INIT is not completely solving the problem, but it
> >> takes at least NMI and SMI out of the picture.
> >
> > Don't most SMM handlers rendezvous all CPUs?  I.e. won't blocking SMIs indefinitely
> > potentially cause problems too?
> 
> Not that I'm aware of. If so then this would be a hideous firmware bug
> as firmware must be aware of CPUs which hang around in INIT independent
> of this.
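
As an aside, the EFER.SVME + CLGI parking Sean describes above boils down
to roughly the sequence below. Illustration only, not real kernel code;
the rdmsrll()/wrmsrll()/svm_park_self() helper names are made up for the
sketch.

/*
 * Park the CPU with the global interrupt flag cleared. With GIF=0 the
 * core holds off INTR/NMI/SMI, so (per the description above) only
 * single-step #DB and RESET can disturb the HLT loop.
 */
#define MSR_EFER	0xc0000080
#define EFER_SVME	(1ULL << 12)

static inline unsigned long long rdmsrll(unsigned int msr)
{
	unsigned int lo, hi;

	asm volatile("rdmsr" : "=a" (lo), "=d" (hi) : "c" (msr));
	return ((unsigned long long)hi << 32) | lo;
}

static inline void wrmsrll(unsigned int msr, unsigned long long val)
{
	asm volatile("wrmsr" : : "c" (msr), "a" ((unsigned int)val),
		     "d" ((unsigned int)(val >> 32)));
}

static void __attribute__((noreturn)) svm_park_self(void)
{
	/* CLGI raises #UD unless EFER.SVME is set */
	wrmsrll(MSR_EFER, rdmsrll(MSR_EFER) | EFER_SVME);
	asm volatile("clgi");

	for (;;)
		asm volatile("hlt");
}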

SMM does do the rendezvous of all CPUs, but it also has a way to detect the
blocked ones (those sitting in wait-for-SIPI) via some package-scoped ubox
register, so it knows to skip them. I can find this in internal sources,
but those details aren't in the edk2 open reference code; they happen to be
documented only in the BWG, which isn't freely available.

I believe it's behind GetSmmDelayedBlockedDisabledCount() ->
	SmmCpuFeaturesGetSmmRegister().
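
FWIW, a rough sketch of what that check conceptually does, going by the
open SmmCpuFeaturesLib interface. The CountCpusToSkip() wrapper is a
made-up name for illustration, and I'm going from memory on the
SMM_REG_NAME values; the implementation that actually reads the ubox
register lives in the closed platform code, not in the open tree.

/*
 * Hypothetical sketch (not the actual edk2 code): the SMI rendezvous
 * can skip CPUs that will never check in, e.g. CPUs parked in
 * wait-for-SIPI, by asking the platform library about their state.
 */
#include <PiSmm.h>
#include <Library/SmmCpuFeaturesLib.h>

STATIC
UINT32
CountCpusToSkip (
  IN UINTN  NumberOfCpus
  )
{
  UINTN   CpuIndex;
  UINT32  Skip;

  Skip = 0;
  for (CpuIndex = 0; CpuIndex < NumberOfCpus; CpuIndex++) {
    //
    // A CPU reported as "SMM delayed" or "SMM blocked" cannot enter
    // SMM right now, so the rendezvous loop does not wait for it.
    //
    if ((SmmCpuFeaturesGetSmmRegister (CpuIndex, SmmRegSmmDelayed) != 0) ||
        (SmmCpuFeaturesGetSmmRegister (CpuIndex, SmmRegSmmBlocked) != 0)) {
      Skip++;
    }
  }

  return Skip;
}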
