Message-ID: <ZIECZMHxtEYnuBAJ@araj-dh-work>
Date: Wed, 7 Jun 2023 15:19:16 -0700
From: Ashok Raj <ashok_raj@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Ashok Raj <ashok.raj@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>,
Arjan van de Veen <arjan@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Eric Biederman <ebiederm@...ssion.com>,
Ashok Raj <ashok.raj@...el.com>
Subject: Re: [patch 0/6] Cure kexec() vs. mwait_play_dead() troubles
On Wed, Jun 07, 2023 at 10:33:35AM -0700, Sean Christopherson wrote:
> On Wed, Jun 07, 2023, Ashok Raj wrote:
> > On Tue, Jun 06, 2023 at 12:41:43AM +0200, Thomas Gleixner wrote:
> > > On Mon, Jun 05 2023 at 10:41, Sean Christopherson wrote:
> > > > On Sat, Jun 03, 2023, Thomas Gleixner wrote:
> > > >> This is only half safe because HLT can resume execution due to NMI, SMI and
> > > >> MCE. Unfortunately there is no real safe mechanism to "park" a CPU reliably,
> > > >
> > > > On Intel. On AMD, enabling EFER.SVME and doing CLGI will block everything except
> > > > single-step #DB (lol) and RESET. #MC handling is implementation-dependent and
> > > > *might* cause shutdown, but at least there's a chance it will work. And presumably
> > > > modern CPUs do pend the #MC until GIF=1.
> > >
> > > Abusing SVME for that is definitely in the realm of creative bonus
> > > points, but not necessarily a general purpose solution.
> > >
> > > >> So parking them via INIT is not completely solving the problem, but it
> > > >> takes at least NMI and SMI out of the picture.
> > > >
> > > > Don't most SMM handlers rendezvous all CPUs? I.e. won't blocking SMIs indefinitely
> > > > potentially cause problems too?
> > >
> > > Not that I'm aware of. If so then this would be a hideous firmware bug
> > > as firmware must be aware of CPUs which hang around in INIT independent
> > > of this.
> >
> > SMM does rendezvous all CPUs, but it also has a way to detect the
> > blocked ones (those in wait-for-SIPI, WFS) via a package-scoped ubox
> > register, so it knows to skip them. I can find this in internal sources,
> > but it isn't in the edk2 open reference code. It happens to be documented
> > only in the BWG, which isn't freely available.
>
> Ah, so putting CPUs into WFS shouldn't result in odd delays. At least not on
> bare metal. Hmm, and AFAIK the primary use case for SMM in VMs is for secure
I never knew SMM had any role in VMs; I thought SMM was always native.
Who owns the SMM for VMs? The virtual BIOS?
> boot, so taking SMIs after booting and putting CPUs back into WFS should be ok-ish.
>
> Finding a victim to test this in a QEMU VM w/ Secure Boot would be nice to have.
I always seem to turn off Secure Boot when installing Ubuntu :-). I'll try
to find someone who might know, especially about doing SMM in VMs.

Can you say what needs to be validated in the guest? Would doing kexec
inside the guest with the new patch set be sufficient? Or do you mean: in
the guest, do a kexec and launch a Secure Boot of the new kernel?

If there is a specific test you want done, let me know.
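In case it helps pin down the setup being asked about, something like the following is the shape of it. This is an untested outline, not a recipe: the OVMF image paths are distro-specific assumptions, and the guest kernel/initrd names are placeholders.

```shell
# Boot a QEMU guest with SMM emulation and Secure Boot-capable OVMF.
# OVMF paths vary by distro (these follow the edk2-ovmf packaging).
qemu-system-x86_64 \
    -enable-kvm -machine q35,smm=on -smp 4 -m 4G \
    -global driver=cfi.pflash01,property=secure,value=on \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd \
    -drive file=guest.img,format=qcow2

# Inside the guest, with the patch set applied: confirm Secure Boot is
# active, then kexec into the new kernel and check that all CPUs come
# back online (and that SMIs taken afterwards don't wedge anything).
mokutil --sb-state
kexec -l /boot/vmlinuz-new --initrd=/boot/initrd.img-new --reuse-cmdline
kexec -e
```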