Message-ID: <Zd9hrfJ5xRI6HeZp@google.com>
Date: Wed, 28 Feb 2024 08:39:09 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, michael.roth@....com,
isaku.yamahata@...el.com, thomas.lendacky@....com
Subject: Re: [PATCH 00/21] TDX/SNP part 1 of n, for 6.9
On Wed, Feb 28, 2024, Paolo Bonzini wrote:
> On Wed, Feb 28, 2024 at 2:25 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > > Michael Roth (2):
> > > KVM: x86: Add gmem hook for invalidating memory
> > > KVM: x86: Add gmem hook for determining max NPT mapping level
> > >
> > > Paolo Bonzini (6):
> > > KVM: x86/mmu: pass error code back to MMU when async pf is ready
> > > KVM: x86/mmu: Use PFERR_GUEST_ENC_MASK to indicate fault is private
> >
> > This doesn't work. The ENC flag gets set on any SNP *capable* CPU, which results
> > in false positives for SEV and SEV-ES guests[*].
>
> You didn't look at the patch did you? :)
Guilty, sort of. I looked at (and tested) the patch from the TDX series, but I
didn't look at what you posted. But it's a moot point, because now I did look at
what you posted, and it's still broken :-)
> It does check for has_private_mem (alternatively I could have dropped the bit
> in SVM code for SEV and SEV-ES guests).
The problem isn't with *KVM* setting the bit, it's with *hardware* setting the
bit for SEV and SEV-ES guests. That results in this:

  .is_private = vcpu->kvm->arch.has_private_mem && (err & PFERR_GUEST_ENC_MASK),

marking the fault as private. Which, in a vacuum, isn't technically wrong, since
from hardware's perspective the vCPU access was "private". But from KVM's
perspective, SEV and SEV-ES guests don't have private memory, they have memory
that can be *encrypted*, and marking the access as "private" results in violations
of KVM's rules for private memory. Specifically, it results in KVM triggering
emulated MMIO for faults that are marked private, which we want to disallow for
SNP and TDX.
And because the flag only gets set on SNP-capable hardware (in my limited testing
of a whole two systems), running the same VM on different hardware would result
in faults being marked private on one system, but not the other. Which means that
KVM can't rely on the flag being set for SEV or SEV-ES guests, i.e. we can't
retroactively enforce anything (not to mention that that might break existing VMs).
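A sketch of the alternative you mention, i.e. dropping the bit in SVM code, would be
something like this (completely untested, purely illustrative, and using
sev_snp_guest() as the predicate is an assumption on my end):

  /*
   * Illustrative only: strip the hardware-set ENC bit before it reaches
   * the common fault path for guests that don't have KVM-visible private
   * memory, i.e. SEV and SEV-ES guests running on SNP-capable hardware.
   */
  if (!sev_snp_guest(vcpu->kvm))
          error_code &= ~PFERR_GUEST_ENC_MASK;

That would keep SEV and SEV-ES faults from ever being treated as private,
regardless of what hardware the VM happens to land on.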