Message-ID: <8b40f8b1d1fa915116ef1c95a13db0e55d3d91f2.camel@intel.com>
Date: Tue, 9 Apr 2024 14:46:44 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "seanjc@...gle.com" <seanjc@...gle.com>
CC: "davidskidmore@...gle.com" <davidskidmore@...gle.com>, "Li, Xiaoyao"
<xiaoyao.li@...el.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "srutherford@...gle.com"
<srutherford@...gle.com>, "pankaj.gupta@....com" <pankaj.gupta@....com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Yamahata, Isaku"
<isaku.yamahata@...el.com>, "Wang, Wei W" <wei.w.wang@...el.com>
Subject: Re: [ANNOUNCE] PUCK Notes - 2024.04.03 - TDX Upstreaming Strategy
On Mon, 2024-04-08 at 18:37 -0700, Sean Christopherson wrote:
> > > Is guest.MAXPHYADDR one of those? If so, use that.
> >
> > No it is not configurable. I'm looking into making it configurable, but it
> > is not likely to happen before we were hoping to get basic support upstream.
>
> Yeah, love me some hardware defined software.
>
> > An alternative would be to have the KVM API peek at the value, and then
> > discard it (not pass the leaf value to the TDX module). Not ideal.
>
> Heh, I typed up this idea before reading ahead. This has my vote. Unless I'm
> misreading where things are headed, using guest.MAXPHYADDR to communicate what
> is essentially GPAW to the guest is about to become the de facto standard.
>
> At that point, KVM can basically treat the current TDX module behavior as an
> erratum, i.e. discarding guest.MAXPHYADDR becomes a workaround for a "CPU"
> bug, not some goofy KVM quirk.
Makes sense. I'd like to get to the point where we can say it's for sure coming.
Hopefully that will happen soon.
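To make the peek-and-discard idea concrete, here is a rough userspace-style
sketch (not actual KVM code; the entry struct and helper names are invented
for illustration) of deriving GPAW from the guest.MAXPHYADDR field in CPUID
leaf 0x80000008 and then clearing it before the leaf would be handed to the
TDX module:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for KVM's CPUID entry type. */
struct tdx_cpuid_entry {
	uint32_t function;
	uint32_t eax, ebx, ecx, edx;
};

/*
 * Peek at guest.MAXPHYADDR (CPUID.0x80000008:EAX[23:16] per the SDM),
 * derive GPAW from it, then discard the field so the TDX module never
 * sees a value it would pin to something else.
 */
static int gpaw_from_cpuid(struct tdx_cpuid_entry *ents, int n)
{
	for (int i = 0; i < n; i++) {
		if (ents[i].function != 0x80000008)
			continue;
		uint8_t guest_maxpa = (ents[i].eax >> 16) & 0xff;

		if (!guest_maxpa)
			guest_maxpa = ents[i].eax & 0xff; /* fall back to MAXPHYADDR */
		ents[i].eax &= ~(0xffu << 16);		  /* the "discard" part */
		return guest_maxpa > 48;		  /* GPAW=1 => 52-bit GPAs */
	}
	return 0;					  /* default to 48-bit GPAW */
}

int main(void)
{
	struct tdx_cpuid_entry e = { .function = 0x80000008,
				     .eax = (52u << 16) | 52 };
	printf("gpaw=%d eax=0x%x\n", gpaw_from_cpuid(&e, 1), e.eax);
	return 0;
}

The point being that userspace keeps programming guest.MAXPHYADDR as usual,
and the discard stays an internal workaround for the module behavior.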
> >
[snip]
> > >
>
> As I said in PUCK (and recorded in the notes), the fixed values should be
> provided in a data format that is easily consumed by C code, so that KVM can
> report that to userspace with
Right, I thought I heard this on the call, along with using the upper bits of
that leaf for GPAW. What has changed since then is a little more learning about
the TDX module behavior around CPUID bits.
The runtime API doesn't provide what the fixed values actually are, but per the
TDX module folks, which bits are fixed and what the values are could change
without an opt-in. This raised the question for me of what exactly KVM should
expect of TDX module backwards compatibility and what SW is expected to actually
do with that JSON file. I'm still trying to track that down. Long term we need
the TDX module to expose an interface to provide more info about the CPUID
leafs, and those discussions are just starting.
If KVM needs to expose the values of the fixed leafs today (doesn't seem like a
bad idea, but I'm still not clear on the exact consumption), then most would
have to be exposed as "unknown", or something like that.
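Something like the following toy encoding is what I had in mind for "unknown"
(purely illustrative, not a proposed uAPI; the struct is made up): per
register, one mask says which bits the TDX module pins, a second carries the
pinned values, and every bit outside the first mask reports as unknown:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical reporting format for one CPUID register. */
struct tdx_fixed_reg {
	uint32_t fixed;	/* 1 = bit is fixed by the TDX module */
	uint32_t value;	/* pinned value, only valid where fixed=1 */
};

static void print_bit_state(const struct tdx_fixed_reg *r, int bit)
{
	if (!(r->fixed & (1u << bit)))
		printf("bit %2d: unknown/configurable\n", bit);
	else
		printf("bit %2d: fixed to %u\n", bit, (r->value >> bit) & 1);
}

int main(void)
{
	/* Example values only: pretend bit 3 is fixed-1 and bit 5 is fixed-0. */
	struct tdx_fixed_reg ecx = { .fixed = (1u << 3) | (1u << 5),
				     .value = (1u << 3) };
	for (int bit = 0; bit < 8; bit++)
		print_bit_state(&ecx, bit);
	return 0;
}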
>
> > So the current interface won't allow us to perfectly match the
> > KVM_GET_SUPPORTED_CPUID/KVM_SET_CPUID. Even excluding the vm-scoped vs vcpu-
> > scoped differences. However, we could try to match the general design a
> > little better.
>
> No, don't try to match KVM_GET_SUPPORTED_CPUID, it's a terrible API that no
> one likes. The only reason we haven't replaced it is because no one has come
> up with a universally better idea. For feature flags, communicating what KVM
> supports is straightforward, mostly. But for things like topology,
> communicating exactly what KVM "supports" is much more difficult.
>
> The TDX fixed bits are very different. It's the TDX module, and thus KVM,
> saying "here are the bits that you _must_ set to these exact values".
Right, we would need something like a KVM_GET_SUPPORTED_CPUID_ON,
KVM_GET_SUPPORTED_CPUID_OFF and KVM_GET_SUPPORTED_CPUID_OPTIONAL. And we would
still inherit the KVM_GET_SUPPORTED_CPUID problems for the leafs that are not
simple bits.
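As a strawman, the _ON/_OFF/_OPTIONAL split could reduce to two masks per
register, with everything outside them optional. A minimal sketch of how KVM
could validate a userspace-provided register value against that (hypothetical
names, not a real ioctl):

#include <stdint.h>
#include <stdio.h>

/* Reject a register value unless every must_on bit is set and no
 * must_off bit is; anything outside both masks is optional. */
static int check_leaf(uint32_t set, uint32_t must_on, uint32_t must_off)
{
	if ((set & must_on) != must_on)
		return -1;	/* a required bit is clear */
	if (set & must_off)
		return -1;	/* a forbidden bit is set */
	return 0;
}

int main(void)
{
	uint32_t must_on = 1u << 0, must_off = 1u << 1;

	printf("%d\n", check_leaf(0x1, must_on, must_off));	/* ok:  0 */
	printf("%d\n", check_leaf(0x3, must_on, must_off));	/* bad: -1 */
	return 0;
}

It doesn't help with the leafs that are not simple bits (topology, etc.),
which is where the existing API's problems would carry over.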
>
> > Here we were discussing making gpaw configurable via a dedicated named
> > field, but the suggestion is to instead include it in CPUID bits. The
> > current API takes ATTRIBUTES as a dedicated field too. But there actually
> > are CPUID bits for some of those features. Those CPUID bits are controlled
> > instead via the associated ATTRIBUTES. So we could expose such features via
> > CPUID as well. Userspace would, for example, pass the PKS CPUID bit in, and
> > KVM would see it and configure PKS via the ATTRIBUTES bit.
> >
> > So what I was looking to understand is, what is the enthusiasm for
> > generally continuing to use CPUID as the main method for specifying which
> > features should be enabled/virtualized, if we can't match the existing
> > KVM_GET_SUPPORTED_CPUID/KVM_SET_CPUID APIs. Is the hope just to make
> > userspace's code more unified between TDX and normal VMs?
>
> I need to look at the TDX code more to form an (updated) opinion. IIRC, my
> opinion from four years ago was to use ATTRIBUTES and then force CPUID to
> match. Whether or not that's still my preferred approach probably depends on
> how many, and what, things are shoved into attributes.
Thanks. Paolo seemed eager to get the uAPI settled for TDX. Based on that, it's
one of the top priorities for us right now.
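For discussion, here is roughly how I picture the ATTRIBUTES-then-force-CPUID
translation working, using PKS as the example. The bit positions (PKS in
CPUID.(7,0):ECX[31], ATTRIBUTES bit 30) are from my reading of the specs, so
treat them as assumptions, and the code is a standalone sketch rather than
KVM code:

#include <stdint.h>
#include <stdio.h>

#define CPUID_7_0_ECX_PKS	(1u << 31)	/* assumed CPUID position */
#define TDX_ATTR_PKS		(1ull << 30)	/* assumed ATTRIBUTES position */

struct cpuid_ent { uint32_t fn, idx, eax, ebx, ecx, edx; };

/*
 * Translate a CPUID feature bit from userspace into the matching TD
 * ATTRIBUTES bit, and drop the CPUID bit itself since the module
 * derives it from the attribute.
 */
static uint64_t attrs_from_cpuid(struct cpuid_ent *ents, int n)
{
	uint64_t attrs = 0;

	for (int i = 0; i < n; i++) {
		if (ents[i].fn != 7 || ents[i].idx != 0)
			continue;
		if (ents[i].ecx & CPUID_7_0_ECX_PKS) {
			attrs |= TDX_ATTR_PKS;
			ents[i].ecx &= ~CPUID_7_0_ECX_PKS;
		}
	}
	return attrs;
}

int main(void)
{
	struct cpuid_ent e = { .fn = 7, .idx = 0, .ecx = CPUID_7_0_ECX_PKS };

	printf("attrs=0x%llx ecx=0x%x\n",
	       (unsigned long long)attrs_from_cpuid(&e, 1), e.ecx);
	return 0;
}

That way userspace's flow stays CPUID-shaped, and the ATTRIBUTES plumbing
stays an internal detail.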
Having userspace land upstream before the kernel makes me nervous, though. It
caused a pile of problems for my last project (shadow stack).