Message-ID: <ffcc8c550d5ba6122b201d8170b42ee581826d47.camel@intel.com>
Date: Wed, 29 Nov 2023 04:36:12 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"jpiotrowski@...ux.microsoft.com" <jpiotrowski@...ux.microsoft.com>
CC: "tim.gardner@...onical.com" <tim.gardner@...onical.com>,
"cascardo@...onical.com" <cascardo@...onical.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"roxana.nicolescu@...onical.com" <roxana.nicolescu@...onical.com>,
"haiyangz@...rosoft.com" <haiyangz@...rosoft.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"stefan.bader@...onical.com" <stefan.bader@...onical.com>,
"Cui, Dexuan" <decui@...rosoft.com>,
"nik.borisov@...e.com" <nik.borisov@...e.com>,
"mhkelley58@...il.com" <mhkelley58@...il.com>,
"hpa@...or.com" <hpa@...or.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"bp@...en8.de" <bp@...en8.de>,
"sashal@...nel.org" <sashal@...nel.org>,
"kys@...rosoft.com" <kys@...rosoft.com>,
"x86@...nel.org" <x86@...nel.org>
Subject: Re: [PATCH v1 1/3] x86/tdx: Check for TDX partitioning during early
TDX init

On Fri, 2023-11-24 at 17:19 +0100, Jeremi Piotrowski wrote:
> On 24/11/2023 14:33, Kirill A. Shutemov wrote:
> > On Fri, Nov 24, 2023 at 12:04:56PM +0100, Jeremi Piotrowski wrote:
> > > On 24/11/2023 11:43, Kirill A. Shutemov wrote:
> > > > On Fri, Nov 24, 2023 at 11:31:44AM +0100, Jeremi Piotrowski wrote:
> > > > > On 23/11/2023 14:58, Kirill A. Shutemov wrote:
> > > > > > On Wed, Nov 22, 2023 at 06:01:04PM +0100, Jeremi Piotrowski wrote:
> > > > > > > Check for additional CPUID bits to identify TDX guests running with Trust
> > > > > > > Domain (TD) partitioning enabled. TD partitioning is like nested virtualization
> > > > > > > inside the Trust Domain so there is a L1 TD VM(M) and there can be L2 TD VM(s).
> > > > > > >
> > > > > > > In this arrangement we are not guaranteed that the TDX_CPUID_LEAF_ID is visible
> > > > > > > to Linux running as an L2 TD VM. This is because a majority of TDX facilities
> > > > > > > are controlled by the L1 VMM and the L2 TDX guest needs to use TD partitioning
> > > > > > > aware mechanisms for what's left. So currently such guests do not have
> > > > > > > X86_FEATURE_TDX_GUEST set.
> > > > > > >
> > > > > > > We want the kernel to have X86_FEATURE_TDX_GUEST set for all TDX guests so we
> > > > > > > need to check these additional CPUID bits, but we skip further initialization
> > > > > > > in the function as we aren't guaranteed access to TDX module calls.
> > > > > >
> > > > > > I don't follow. The idea of partitioning is that L2 OS can be
> > > > > > unenlightened and have no idea if it runs inside of TD. But this patch
> > > > > > tries to enumerate TDX anyway.
> > > > > >
> > > > > > Why?
> > > > > >
> > > > >
> > > > > That's not the only idea of partitioning. Partitioning provides different privilege
> > > > > levels within the TD, and unenlightened L2 OS can be made to work but are inefficient.
> > > > > In our case Linux always runs enlightened (both with and without TD partitioning), and
> > > > > uses TDX functionality where applicable (TDX vmcalls, PTE encryption bit).
> > > >
> > > > What value L1 adds in this case? If L2 has to be enlightened just run the
> > > > enlightened OS directly as L1 and ditch half-measures. I think you can
> > > > gain some performance this way.
> > > >
> > >
> > > It's primarily about the privilege separation, performance is a reason
> > > one doesn't want to run unenlightened. The L1 makes the following possible:
> > > - TPM emulation within the trust domain but isolated from the OS
> > > - infrastructure interfaces for things like VM live migration
> > > - support for Virtual Trust Levels[1], Virtual Secure Mode[2]
> > >
> > > These provide a lot of value to users, it's not at all about half-measures.

It's not obvious why the things listed above are related to being a TDX
guest.  They look more like Hyper-V specific enlightenments which don't
depend on the guest being a TDX guest.

For instance, the "Emulating Hyper-V VSM with KVM" design in your [2] above
says nothing about TDX (or SEV):

https://lore.kernel.org/lkml/20231108111806.92604-34-nsaenz@amazon.com/

> >
> > Hm. Okay.
> >
> > Can we take a step back? What is bigger picture here? What enlightenment
> > do you expect from the guest when everything is in-place?
> >
>
> All the functional enlightenment are already in place in the kernel and
> everything works (correct me if I'm wrong Dexuan/Michael). The enlightenments
> are that TDX VMCALLs are needed for MSR manipulation and vmbus operations,
> encrypted bit needs to be manipulated in the page tables and page
> visibility propagated to VMM.

Not quite familiar with the hyperv enlightenments, but are these
enlightenments TDX guest specific?  Because if they are not, then they
should be able to be emulated by normal hyperv, thus hyperv as the L1 (which
is a TDX guest) can emulate them w/o letting the L2 know the hypervisor it
runs on is actually a TDX guest.

Btw, even if there's a performance concern here, as you mentioned the
TDVMCALL is actually made to the L0, which means the L0 must be aware that
such a VMCALL is from the L2 and needs to inject it into the L1 to handle.
IMHO that not only complicates the L0 but also may not bring any performance
benefit.
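
(For reference, my understanding of the MSR enlightenment mentioned above is
roughly the below -- just a simplified sketch of the GHCI "Instruction.RDMSR"
TDVMCALL, not the actual hyperv or arch/x86/coco/tdx code; tdvmcall() and
struct tdvmcall_regs are made-up stand-ins for the kernel's real TDCALL
plumbing.)

/*
 * Sketch of a GHCI "Instruction.RDMSR" TDVMCALL as issued by an
 * enlightened TDX guest.  The register layout follows the GHCI spec;
 * the helper names are hypothetical.
 */
static u64 tdx_msr_read(unsigned int msr)
{
	struct tdvmcall_regs regs = {
		.r10 = 0,	/* standard GHCI TDVMCALL */
		.r11 = 31,	/* EXIT_REASON_MSR_READ */
		.r12 = msr,	/* MSR index to read */
	};

	/* TDCALL(TDG.VP.VMCALL): exits to the VMM, which emulates the MSR */
	tdvmcall(&regs);

	return regs.r11;	/* MSR value comes back in R11 */
}
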
>
> Whats missing is the tdx_guest flag is not exposed to userspace in /proc/cpuinfo,
> and as a result dmesg does not currently display:
> "Memory Encryption Features active: Intel TDX".
>
> That's what I set out to correct.
>
> > So far I see that you try to get kernel think that it runs as TDX guest,
> > but not really. This is not very convincing model.
> >
>
> No that's not accurate at all. The kernel is running as a TDX guest so I
> want the kernel to know that.
>

But it isn't.  It runs on a hypervisor which is a TDX guest, but that doesn't
make the L2 itself a TDX guest.

> TDX is not a monolithic thing, it has different
> features that can be in-use and it has differences in behavior when running
> with TD partitioning (example: no #VE/TDX module calls). So those differences
> need to be clearly modeled in code.

Well IMHO this is a design choice but not a fact.  E.g., if we never set the
TDX_GUEST flag for the L2 then it naturally doesn't use any TDX guest related
stuff.  Otherwise we need additional patches like your patch 2/3 in this
series to stop the L2 from using certain TDX functionality.

And I guess we will need more patches to stop the L2 from doing TDX guest
things.  E.g., we might want to disable the TDX attestation interface in the
L2 guest, because the L2 is indeed not a TDX guest.

So to me, even if there's value in advertising the L2 as a TDX guest, the
pros/cons need to be evaluated to see whether it is worth it.
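
(To illustrate the kind of extra patch I mean for the attestation interface
-- a rough sketch only, against the init path of drivers/virt/coco/tdx-guest;
X86_FEATURE_TDX_PARTITIONED_GUEST is purely hypothetical and not something
this series or the kernel defines today:)

/*
 * Sketch: bail out of the TDX attestation driver when running as a
 * partitioned L2, since the L2 cannot issue TDG.MR.REPORT itself.
 * X86_FEATURE_TDX_PARTITIONED_GUEST is a made-up flag used only for
 * illustration.
 */
static int __init tdx_guest_init(void)
{
	if (!x86_match_cpu(tdx_guest_ids))
		return -ENODEV;

	if (cpu_feature_enabled(X86_FEATURE_TDX_PARTITIONED_GUEST))
		return -ENODEV;

	return misc_register(&tdx_misc_dev);
}
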
>
> > Why does L2 need to know if it runs under TDX or SEV? Can't it just think
> > it runs as Hyper-V guest and all difference between TDX and SEV abstracted
> > by L1?
> >
>
> If you look into the git history you'll find this was attempted with
> CC_VENDOR_HYPERV. That proved to be a dead end as some things just can't be
> abstracted (GHCI vs GHCB; the encrypted bit works differently). What resulted
> was a ton of conditionals and duplication. After long discussions with Borislav
> we converged on clearly identifying with the underlying technology (SEV/TDX)
> and being explicit about support for optional parts in each scheme (like vTOM).

Can you provide more background?  For instance, why does the L2 need to know
the encrypted bit that is in *L1*?
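
(Just to make sure we are talking about the same thing: my understanding of
"the encrypted bit works differently" is the below -- a trivial sketch with
made-up helpers, showing that a TDX guest sets a shared-GPA bit to expose a
page to the VMM while an SEV guest sets the C-bit to mark a page encrypted.)

/*
 * TDX: private/encrypted is the default; setting the shared bit in the
 * PTE makes the page accessible to the VMM.
 */
static u64 tdx_make_shared(u64 pte, u64 shared_bit)
{
	return pte | shared_bit;
}

/*
 * SEV: setting the C-bit in the PTE marks the page encrypted/private.
 */
static u64 sev_make_private(u64 pte, u64 c_bit)
{
	return pte | c_bit;
}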