Message-ID: <YK/VUPi+zFO6wFXB@google.com>
Date: Thu, 27 May 2021 17:22:24 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Tom Lendacky <thomas.lendacky@....com>
Cc: Peter Gonda <pgonda@...gle.com>, kvm list <kvm@...r.kernel.org>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH] KVM: SVM: Do not terminate SEV-ES guests on GHCB
validation failure
On Thu, May 20, 2021, Tom Lendacky wrote:
> On 5/20/21 2:16 PM, Sean Christopherson wrote:
> > On Mon, May 17, 2021, Tom Lendacky wrote:
> >> On 5/14/21 6:06 PM, Peter Gonda wrote:
> >>> On Fri, May 14, 2021 at 1:22 PM Tom Lendacky <thomas.lendacky@....com> wrote:
> >>>>
> >>>> Currently, an SEV-ES guest is terminated if validation of the VMGEXIT
> >>>> exit code and parameters fails. Since the VMGEXIT instruction can be issued
> >>>> from userspace, even though userspace (likely) can't update the GHCB,
> >>>> userspace should not be able to kill the guest.
> >>>>
> >>>> Return a #GP request through the GHCB when validation fails, rather than
> >>>> terminating the guest.
> >>>
> >>> Is this a gap in the spec? I don't see anything that details what
> >>> should happen if the correct fields for NAE are not set in the first
> >>> couple paragraphs of section 4 'GHCB Protocol'.
> >>
> >> No, I don't think the spec needs to spell out everything like this. The
> >> hypervisor is free to determine its course of action in this case.
> >
> > The hypervisor can decide whether to inject/return an error or kill the guest,
> > but what errors can be returned and how they're returned absolutely needs to be
> > ABI between guest and host, and to make the ABI vendor agnostic the GHCB spec
> > is the logical place to define said ABI.
>
> For now, that is all we have for versions 1 and 2 of the spec. We can
> certainly extend it in future versions if that is desired.
>
> I would suggest starting a thread on what we would like to see in the next
> version of the GHCB spec on the amd-sev-snp mailing list:
>
> amd-sev-snp@...ts.suse.com
Will do, but in the meantime, I don't think we should merge a fix of any kind
until there is consensus on what the VMM behavior will be. IMO, fixing this in
upstream is not urgent; I highly doubt anyone is deploying SEV-ES in production
using a bleeding edge KVM.
> > For example, "injecting" #GP if the guest botched the GHCB on #VMGEXIT(CPUID) is
> > completely nonsensical. As is, a Linux guest appears to blindly forward the #GP,
> > which means if something does go awry KVM has just made debugging the guest that
> > much harder, e.g. imagine the confusion that will ensue if the end result is a
> > SIGBUS to userspace on CPUID.
>
> I see the point you're making, but I would also say that we probably
> wouldn't even boot successfully if the kernel can't handle, e.g., a CPUID
> #VC properly.
I agree that GHCB bugs in the guest will be fatal, but that doesn't give the VMM
carte blanche to do whatever it wants given bad input.
> A lot of what could go wrong with required inputs, not the values, but the
> required state being communicated, should have already been ironed out during
> development of whichever OS is providing the SEV-ES support.
Yes, but betting on the kernel never having a regression is a losing proposition.
And it doesn't even necessarily require a regression, e.g. an existing memory
corruption bug elsewhere in the guest kernel (that escaped qualification) could
corrupt the GHCB. If the GHCB is corrupted at runtime, the guest needs
well-defined semantics from the VMM so that the guest at least has a chance of
sanely handling the error. Handling in this case would mean an oops/panic, but
that's far, far better than a random pseudo-#GP that might not even be immediately
logged as a failure.
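To make the point concrete, here is a rough sketch of the kind of well-defined
error return I have in mind: on GHCB validation failure the hypervisor writes an
error code into the GHCB's SW_EXITINFO fields and resumes the guest, instead of
terminating it or injecting a bogus #GP. The field names, valid-bitmap layout,
and error values below are purely illustrative, not from the GHCB spec; whatever
we actually do needs to be nailed down in the spec first.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Minimal stand-in for the shared GHCB page (illustrative layout only). */
struct ghcb {
	uint64_t sw_exit_code;   /* NAE event code, e.g. CPUID */
	uint64_t sw_exit_info_1; /* hypervisor -> guest: 0 = ok, nonzero = error */
	uint64_t sw_exit_info_2; /* hypervisor -> guest: error detail */
	uint64_t valid_bitmap;   /* which state fields the guest marked valid */
};

#define GHCB_EXIT_CPUID		0x72ULL
#define CPUID_REQUIRED_FIELDS	0x3ULL	/* hypothetical: RAX and RCX valid bits */

#define GHCB_RESP_OK		0ULL
#define GHCB_RESP_ERR		2ULL	/* hypothetical "malformed GHCB" marker */
#define GHCB_ERR_MISSING_INPUT	1ULL	/* hypothetical detail code */

/*
 * Validate the GHCB for a CPUID NAE event.  On failure, report the error
 * through the GHCB and resume the guest so its #VC handler can oops/panic
 * sanely, rather than terminating the guest outright.
 */
static bool handle_vmgexit_cpuid(struct ghcb *g)
{
	if ((g->valid_bitmap & CPUID_REQUIRED_FIELDS) != CPUID_REQUIRED_FIELDS) {
		g->sw_exit_info_1 = GHCB_RESP_ERR;
		g->sw_exit_info_2 = GHCB_ERR_MISSING_INPUT;
		return false;	/* resume guest; let it see the error */
	}
	g->sw_exit_info_1 = GHCB_RESP_OK;
	g->sw_exit_info_2 = 0;
	return true;
}
```

The key property is that the guest sees a deterministic, spec-defined error
rather than a side effect it can't attribute to anything.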