Message-ID: <ZeCqnq7dLcJI41O9@google.com>
Date: Thu, 29 Feb 2024 08:02:38 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, michael.roth@....com,
isaku.yamahata@...el.com, thomas.lendacky@....com,
Ashish Kalra <ashish.kalra@....com>
Subject: Re: [PATCH 10/21] KVM: SEV: Use a VMSA physical address variable for
populating VMCB
On Wed, Feb 28, 2024, Paolo Bonzini wrote:
> On Wed, Feb 28, 2024 at 3:00 AM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Tue, Feb 27, 2024, Paolo Bonzini wrote:
> > > From: Tom Lendacky <thomas.lendacky@....com>
> > >
> > > In preparation to support SEV-SNP AP Creation, use a variable that holds
> > > the VMSA physical address rather than converting the virtual address.
> > > This will allow SEV-SNP AP Creation to set the new physical address that
> > > will be used should the vCPU reset path be taken.
> >
> > No, this patch belongs in the SNP series. The handling of vmsa_pa is broken
> > (KVM leaks the page set by the guest; I need to follow up in the SNP series).
> > On top of that, I detest duplicate variables, and I don't like that KVM keeps its
> > original VMSA (kernel allocation) after the guest creates its own.
> >
> > I can't possibly imagine why this needs to be pulled in early. There's no way
> > TDX needs this, and while this patch is _small_, the functional change it leads
> > to is not.
>
> Well, the point of this series (and there will be more if you agree)
> is exactly to ask "why not" in a way that is more manageable than
> through the huge TDX and SNP series. My reading of the above is that
> you believe this is small enough that it can even be merged with "KVM:
> SEV: Support SEV-SNP AP Creation NAE event" (with fixes), which I
> don't disagree with.
Maybe? That wasn't my point.
> Otherwise, if the approach was good there's no reason _not_ to get it
> in early. It's just a refactoring.
It's not really a refactoring though; that's why I'm objecting. If this patch
stored _just_ the physical address of the VMSA, then I would consider it a
refactoring and would have no problem applying it earlier.
But this patch adds a second, 100% duplicate field (as of now), and the reason
it does so is to allow "svm->sev_es.vmsa" to become disconnected from the "real"
VMSA that is used by hardware, which is all kinds of messed up. That's what I
meant by "the functional change it leads to is not (small)".
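For readers outside the thread, here is a standalone userspace sketch of the
divergence being objected to. The names (vcpu_sev_es, vmsa, vmsa_pa, fake_pa)
are illustrative stand-ins, not the actual KVM code; fake_pa() merely mimics
the kernel's virtual-to-physical conversion:

```c
/* Hedged sketch: a cached physical-address field that starts as a pure
 * mirror of the kernel allocation, then is repointed at a guest-created
 * VMSA, leaving ->vmsa disconnected from what hardware actually uses. */
#include <assert.h>
#include <stdint.h>

struct vmsa_page { uint8_t data[4096]; };

/* Stand-in for the kernel's __pa(): here just the pointer value. */
static uintptr_t fake_pa(void *va) { return (uintptr_t)va; }

struct vcpu_sev_es {
	struct vmsa_page *vmsa;	/* KVM's own kernel allocation */
	uintptr_t vmsa_pa;	/* cached PA programmed into the VMCB */
};

/* Returns nonzero once the cached PA no longer matches KVM's allocation,
 * i.e. ->vmsa has become a stale (potentially leaked) page. */
static int vmsa_pa_diverged(const struct vcpu_sev_es *sev)
{
	return sev->vmsa_pa != fake_pa(sev->vmsa);
}
```

In the "pure refactoring" flavor, vmsa_pa is always recomputed from ->vmsa and
vmsa_pa_diverged() can never return true; the SNP AP Creation flow breaks that
invariant by updating only the cached PA.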
> Talking in general: I think I agree about keeping the gmem parts in a
> kvm-coco-queue branch (and in the meanwhile involving the mm people if
> mm/filemap.c changes are needed). #VE too, probably, but what I
> _really_ want to avoid is that these series (the plural is not a typo)
> become a new bottleneck for everybody. Basically these are meant to be
> a "these seem good to go to me, please confirm or deny" between
> comaintainers more than a real patch posting; having an extra branch
> is extra protection against screwups but we should be mindful that
> force pushes are painful for everyone.
Yes, which is largely why I suggested we separate out the gmem work. I suspect we'll
need to force push to fixup gmem things, whereas I'm confident the other prep work
won't need to be tweaked once it's fully reviewed.
For the other stuff, specifically to avoid creating another bottleneck, my preference
is to follow the "normal" rules for posting patches, with slightly relaxed bundling
rules. I.e. post multiple, independent series so that they can be reviewed,
iterated upon, and applied like any other series.
E.g. my objection to this VMSA tracking patch shouldn't get in the way of the MMU
changes, the #VE patch shouldn't interfere with the vmx/main.c patch, etc. In
other words, throwing everything into a kitchen sink "TDX/SNP prep work" series
just creates another (smaller) bottleneck.
I am 100% in favor of applying prep patches in advance of the larger SNP and TDX
series. That's actually partly why I ended up posting my series that includes
the PFERR_PRIVATE_ACCESS patch; I was trying to pull in using PFERR_GUEST_ENC_MASK
and some of the other "simple" patches, and the darn thing failed on me.