Message-Id: <cover.1668988357.git.kai.huang@intel.com>
Date: Mon, 21 Nov 2022 13:26:22 +1300
From: Kai Huang <kai.huang@...el.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: linux-mm@...ck.org, seanjc@...gle.com, pbonzini@...hat.com,
dave.hansen@...el.com, dan.j.williams@...el.com,
rafael.j.wysocki@...el.com, kirill.shutemov@...ux.intel.com,
ying.huang@...el.com, reinette.chatre@...el.com,
len.brown@...el.com, tony.luck@...el.com, peterz@...radead.org,
ak@...ux.intel.com, isaku.yamahata@...el.com, chao.gao@...el.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, bagasdotme@...il.com,
sagis@...gle.com, imammedo@...hat.com, kai.huang@...el.com
Subject: [PATCH v7 00/20] TDX host kernel support
Intel Trust Domain Extensions (TDX) protects guest VMs from a malicious
host and certain physical attacks.  The TDX specs are available in [1].
This series provides the initial support to enable TDX, with the minimal
code needed to allow KVM to create and run TDX guests.  KVM support for
TDX is being developed separately [2].  A new "userspace inaccessible
memfd" approach to support TDX private memory is also being developed
[3]; KVM will only support the new "userspace inaccessible memfd" as TDX
guest memory.
This series doesn't aim to support all functionalities (e.g. exposing the
TDX module via /sysfs), and doesn't aim to resolve everything perfectly.
In particular, the handling of how to choose "TDX-usable" memory and of
memory hotplug is kept simple: this series just makes sure all pages in
the page allocator are TDX memory.
A better solution, suggested by Kirill, is similar to the per-node memory
encryption flag series [4]: a per-node TDX flag can be added so that both
"TDX-capable" and "non-TDX-capable" nodes can co-exist.  By exposing the
TDX flag to userspace via /sysfs, userspace can then use NUMA APIs to
bind TDX guests to the "TDX-capable" nodes.  For more information please
refer to the "Kernel policy on TDX memory" and "Memory hotplug" sections
below.  Huang, Ying is working on this "per-node TDX flag" support and
will post a separate series independently.
(For memory hotplug, sorry for broadcasting widely, but I cc'ed
linux-mm@...ck.org following Kirill's suggestion so that MM experts can
also help provide comments.)
Also, other optimizations will be posted as follow-ups once this initial
TDX support is upstreamed.
Hi Dave, Dan, Kirill, Ying (and Intel reviewers),
Please kindly help review, and I would appreciate Reviewed-by or
Acked-by tags if the patches look good to you.
This series has been reviewed by Isaku, who is developing the KVM TDX
patches.  Kirill has also reviewed a couple of patches.
I would also highly appreciate it if anyone else could help review this
series.
----- Changelog history: ------
- v6 -> v7:
- Added memory hotplug support.
- Changed when the list of "TDX-usable" memory regions is chosen, from
kernel boot time to TDX module initialization time.
- Addressed comments received in previous versions. (Andi/Dave).
- Improved the commit message and the comments of the kexec() support
patch, and made the patch handle returning PAMTs back to the kernel when
TDX module initialization fails.  Please also see the "Kexec()" section
below.
- Changed the documentation patch accordingly.
- For all others please see individual patch changelog history.
- v5 -> v6:
- Removed ACPI CPU/memory hotplug patches. (Intel internal discussion)
- Removed patch to disable driver-managed memory hotplug (Intel
internal discussion).
- Added one patch to introduce enum type for TDX supported page size
level to replace the hard-coded values in TDX guest code (Dave).
- Added one patch to make TDX depend on X2APIC being enabled (Dave).
- Added one patch to build all boot-time present memory regions as TDX
memory during kernel boot.
- Added Reviewed-by from others to some patches.
- For all others please see individual patch changelog history.
- v4 -> v5:
This is essentially a resend of v4.  Sorry, I forgot to consult
get_maintainer.pl when sending out v4, so I missed adding the linux-acpi
and linux-mm mailing lists and the relevant people for the 4 new patches.
There are also very minor code and commit message updates from v4:
- Rebased to latest tip/x86/tdx.
- Fixed a checkpatch issue that I missed in v4.
- Removed an obsoleted comment that I missed in patch 6.
- Very minor update to the commit message of patch 12.
For other changes to individual patches since v3, please refer to the
changelog history of individual patches (I just used v3 -> v5 since
there's basically no code change in v4).
- v3 -> v4 (addressed Dave's comments, and other comments from others):
- Simplified SEAMRR and TDX keyID detection.
- Added patches to handle ACPI CPU hotplug.
- Added patches to handle ACPI memory hotplug and driver managed memory
hotplug.
- Removed tdx_detect() and only use a single tdx_init().
- Removed detecting TDX module via P-SEAMLDR.
- Changed from using e820 to using memblock to convert system RAM to TDX
memory.
- Excluded legacy PMEM from TDX memory.
- Removed the patch that added a boot-time command line option to
disable TDX.
- Addressed comments for other individual patches (please see individual
patches).
- Improved the documentation patch based on the new implementation.
- V2 -> v3:
- Addressed comments from Isaku.
- Fixed memory leak and unnecessary function argument in the patch to
configure the key for the global keyid (patch 17).
- Slightly enhanced the patch that gets TDX module and CMR
information (patch 09).
- Fixed an unintended change in the patch to allocate PAMT (patch 13).
- Addressed comments from Kevin:
- Slight improvement to the commit message of patch 03.
- Removed WARN_ON_ONCE() in the check of cpus_booted_once_mask in
seamrr_enabled() (patch 04).
- Changed the documentation patch to add the TDX host kernel support
material to Documentation/x86/tdx.rst together with the TDX guest stuff,
instead of using a standalone file (patch 21).
- Very minor improvement in commit messages.
- RFC (v1) -> v2:
- Rebased to Kirill's latest TDX guest code.
- Fixed two issues that are related to finding all RAM memory regions
based on e820.
- Minor improvement on comments and commit messages.
v6:
https://lore.kernel.org/linux-mm/cover.1666824663.git.kai.huang@intel.com/T/
v5:
https://lore.kernel.org/lkml/cover.1655894131.git.kai.huang@intel.com/T/
v3:
https://lore.kernel.org/lkml/68484e168226037c3a25b6fb983b052b26ab3ec1.camel@intel.com/T/
V2:
https://lore.kernel.org/lkml/cover.1647167475.git.kai.huang@intel.com/T/
RFC (v1):
https://lore.kernel.org/all/e0ff030a49b252d91c789a89c303bb4206f85e3d.1646007267.git.kai.huang@intel.com/T/
== Background ==
TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM)
and a new isolated range pointed to by the SEAM Range Register (SEAMRR).
A CPU-attested software module called 'the TDX module' runs in this
isolated range as a trusted hypervisor to create/run protected VMs.
TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
provide crypto-protection to the VMs.  TDX reserves part of the MKTME
KeyIDs as TDX private KeyIDs, which are only accessible within SEAM mode.
TDX is different from AMD SEV/SEV-ES/SEV-SNP, which use a dedicated
secure processor to provide crypto-protection.  The firmware running on
that secure processor plays a similar role to the TDX module.
The host kernel communicates with SEAM software via a new SEAMCALL
instruction. This is conceptually similar to a guest->host hypercall,
except it is made from the host to SEAM software instead.
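For illustration only, a C-level SEAMCALL wrapper could look roughly
like the sketch below; the struct and function names here are made up
for this example and are not necessarily what this series uses (the
actual assembly stub in this series lives in seamcall.S):

        #include <linux/types.h>

        /*
         * Additional output registers of a SEAMCALL.  The completion
         * status of the SEAMCALL leaf function itself is returned in
         * RAX.
         */
        struct seamcall_output {
                u64 rcx;
                u64 rdx;
                u64 r8;
                u64 r9;
                u64 r10;
                u64 r11;
        };

        /*
         * Assembly stub which puts the leaf function number into RAX,
         * the inputs into RCX/RDX/R8/R9, executes SEAMCALL, and saves
         * the output registers into @out.
         */
        u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
                       struct seamcall_output *out);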
Before being able to manage TD guests, the TDX module must be loaded
and properly initialized. This series assumes the TDX module is loaded
by BIOS before the kernel boots.
How to initialize the TDX module is described in the TDX module 1.0
specification, chapter 13, "Intel TDX Module Lifecycle: Enumeration,
Initialization and Shutdown".
== Design Considerations ==
1. Initialize the TDX module at runtime
There are basically two ways the TDX module could be initialized: either
in early boot, or at runtime before the first TDX guest is run. This
series implements the runtime initialization.
This series adds a function tdx_enable() to allow the caller to initialize
TDX at runtime:
        if (tdx_enable())
                goto no_tdx;
        // TDX is ready to create TD guests.
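As a usage illustration, an in-kernel TDX user such as KVM could wrap
this as in the hypothetical sketch below; only tdx_enable() comes from
this series, everything else is made up for the example:

        #include <linux/printk.h>

        /* Provided by this series: initialize the TDX module. */
        int tdx_enable(void);

        /* Hypothetical helper a TDX user such as KVM might have. */
        int prepare_for_td_guests(void)
        {
                int ret;

                /* Initialize the TDX module on first use. */
                ret = tdx_enable();
                if (ret) {
                        pr_warn("TDX module initialization failed: %d\n",
                                ret);
                        return ret;
                }

                /* TDX is ready; TD guests can now be created. */
                return 0;
        }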
This approach has the following advantages:
1) Initializing the TDX module requires reserving ~1/256th of system RAM
as metadata (roughly 4GB on a machine with 1TB of RAM).  Enabling TDX on
demand means this memory is only consumed when TDX is truly needed
(i.e. when KVM wants to create TD guests).
2) SEAMCALL requires the CPU to already be in VMX operation (VMXON has
been done).  So far KVM is the only user of TDX, and it already handles
VMXON.  Letting KVM initialize TDX avoids handling VMXON in the core
kernel.
3) It is more flexible for supporting "TDX module runtime update" (not in
this series).  After updating to a new module at runtime, the kernel
needs to go through the initialization process again.
2. CPU hotplug
TDX doesn't support physical (ACPI) CPU hotplug.  A non-buggy BIOS should
never support hotpluggable CPU devices and/or deliver ACPI CPU hotplug
events to the kernel.  This series doesn't handle physical (ACPI) CPU
hotplug at all but depends on the BIOS to behave correctly.
Note TDX works with logical CPU online/offline, so this series still
allows logical CPU online/offline to be done.
3. Kernel policy on TDX memory
The TDX module reports a list of "Convertible Memory Regions" (CMRs) to
indicate which memory regions are TDX-capable.  The TDX architecture
allows the VMM to designate specific convertible memory regions as usable
for TDX private memory.
The initial TDX guest support will only allocate TDX private memory from
the global page allocator.  This series chooses to designate _all_ system
RAM in the core-mm at the time of TDX module initialization as TDX
memory, to guarantee that all pages in the page allocator are TDX pages.
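For illustration, building such a "TDX memory" list could look roughly
like the sketch below; add_tdx_memblock() is a hypothetical helper and
this is not the literal code in the series:

        #include <linux/memblock.h>
        #include <linux/numa.h>

        /* Hypothetical: record [start_pfn, end_pfn) on @nid as TDX memory. */
        static int add_tdx_memblock(unsigned long start_pfn,
                                    unsigned long end_pfn, int nid)
        {
                return 0;
        }

        /*
         * Walk all system RAM regions recorded in memblock and record
         * them as TDX memory, so that every page in the page allocator
         * is covered.
         */
        int build_tdx_memlist(void)
        {
                unsigned long start_pfn, end_pfn;
                int i, nid, ret;

                for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn,
                                       &end_pfn, &nid) {
                        ret = add_tdx_memblock(start_pfn, end_pfn, nid);
                        if (ret)
                                return ret;
                }

                return 0;
        }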
4. Memory Hotplug
After the kernel passes all "TDX-usable" memory regions to the TDX
module, the set of "TDX-usable" memory regions is fixed for the module's
runtime.  No more "TDX-usable" memory can be added to the TDX module
after that.
To guarantee that all pages in the page allocator are TDX pages, this
series simply chooses to reject any non-TDX-usable memory in memory
hotplug, as sketched below.
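A minimal sketch of that rejection is shown below; the helper names are
illustrative and not necessarily those used in this series, and in the
series the check sits in the x86 memory hotplug path:

        #include <linux/errno.h>
        #include <linux/types.h>

        /* Hypothetical: true if the BIOS has enabled TDX. */
        bool platform_tdx_enabled(void);
        /* Hypothetical: true if [start, end) is covered by TDX memory. */
        bool range_is_tdx_memory(u64 start, u64 end);

        /* Called from the memory hotplug path before new memory is added. */
        int tdx_memory_hotplug_check(u64 start, u64 size)
        {
                /* Nothing to check if TDX is not enabled on this machine. */
                if (!platform_tdx_enabled())
                        return 0;

                /* Reject memory the TDX module was not told about. */
                if (!range_is_tdx_memory(start, start + size))
                        return -EINVAL;

                return 0;
        }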
This _will_ be enhanced in the future after the first submission.  The
direction we are heading is to allow adding/onlining non-TDX memory to
separate NUMA nodes so that both "TDX-capable" and "non-TDX-capable"
nodes can co-exist.  The TDX flag can be exposed to userspace via /sysfs
so userspace can bind TDX guests to "TDX-capable" nodes via NUMA ABIs.
Note TDX assumes convertible memory is always physically present during
the machine's runtime.  A non-buggy BIOS should never support hot-removal
of any convertible memory.  This implementation doesn't handle ACPI
memory removal but depends on the BIOS to behave correctly.
5. Kexec()
There are two problems with using kexec() to boot to a new kernel when
the old kernel has enabled TDX: 1) part of the memory pages are still
TDX private pages (i.e. metadata used by the TDX module, and any TDX
guest memory if kexec() happens while a TDX guest is alive); 2) there
might be dirty cachelines associated with TDX private pages.
Just like SME, TDX hosts require special cache flushing before kexec().
Similar to the SME handling, the kernel uses wbinvd() to flush the cache
in stop_this_cpu() when TDX is enabled.
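Conceptually the flush looks like the sketch below; this is not the
literal change to stop_this_cpu(), and platform_tdx_enabled() here just
stands for "the BIOS has enabled TDX":

        #include <linux/types.h>
        #include <asm/special_insns.h>

        /* Hypothetical: true if the BIOS has enabled TDX. */
        bool platform_tdx_enabled(void);

        void kexec_flush_cache_for_tdx(void)
        {
                /*
                 * Dirty cachelines of TDX private pages are tagged with
                 * TDX private KeyIDs.  Flush them before the new kernel
                 * accesses the same memory with KeyID 0, so that stale
                 * writebacks cannot corrupt memory, similar to how SME
                 * is handled.
                 */
                if (platform_tdx_enabled())
                        native_wbinvd();
        }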
This series doesn't convert all TDX private pages back to normal for the
following reasons:
1) The kernel doesn't have existing infrastructure to track which pages
are TDX private pages.
2) The number of TDX private pages can be large, and converting all of
them (cache flush + using MOVDIR64B to clear the page) in kexec() can
be time consuming.
3) The new kernel will almost always use only KeyID 0 to access memory.
KeyID 0 doesn't support integrity-check, so it's OK.
4) The kernel doesn't (and may never) support MKTME.  If any third-party
kernel ever supports MKTME, it should do MOVDIR64B to clear the page
with the new MKTME KeyID (just like TDX does) before using it.
Also, if the old kernel has ever enabled TDX, the new kernel cannot use
TDX again: when the new kernel goes through the TDX module initialization
process, it will fail immediately at the first step.
Ideally it would be better to shut down the TDX module in kexec(), but
there's no guarantee that CPUs are in VMX operation during kexec(), so
the module is just left open.
== Reference ==
[1]: TDX specs
https://software.intel.com/content/www/us/en/develop/articles/intel-trust-domain-extensions.html
[2]: KVM TDX basic feature support
https://lore.kernel.org/lkml/CAAhR5DFrwP+5K8MOxz5YK7jYShhaK4A+2h1Pi31U_9+Z+cz-0A@mail.gmail.com/T/
[3]: KVM: mm: fd-based approach for supporting KVM
https://lore.kernel.org/lkml/20220915142913.2213336-1-chao.p.peng@linux.intel.com/T/
[4]: per-node memory encryption flag
https://lore.kernel.org/linux-mm/20221007155323.ue4cdthkilfy4lbd@box.shutemov.name/t/
Kai Huang (20):
x86/tdx: Define TDX supported page sizes as macros
x86/virt/tdx: Detect TDX during kernel boot
x86/virt/tdx: Disable TDX if X2APIC is not enabled
x86/virt/tdx: Add skeleton to initialize TDX on demand
x86/virt/tdx: Implement functions to make SEAMCALL
x86/virt/tdx: Shut down TDX module in case of error
x86/virt/tdx: Do TDX module global initialization
x86/virt/tdx: Do logical-cpu scope TDX module initialization
x86/virt/tdx: Get information about TDX module and TDX-capable memory
x86/virt/tdx: Use all system memory when initializing TDX module as
TDX memory
x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX
memory regions
x86/virt/tdx: Create TDMRs to cover all TDX memory regions
x86/virt/tdx: Allocate and set up PAMTs for TDMRs
x86/virt/tdx: Set up reserved areas for all TDMRs
x86/virt/tdx: Reserve TDX module global KeyID
x86/virt/tdx: Configure TDX module with TDMRs and global KeyID
x86/virt/tdx: Configure global KeyID on all packages
x86/virt/tdx: Initialize all TDMRs
x86/virt/tdx: Flush cache in kexec() when TDX is enabled
Documentation/x86: Add documentation for TDX host support
Documentation/x86/tdx.rst | 181 +++-
arch/x86/Kconfig | 15 +
arch/x86/Makefile | 2 +
arch/x86/coco/tdx/tdx.c | 6 +-
arch/x86/include/asm/tdx.h | 30 +
arch/x86/kernel/process.c | 8 +-
arch/x86/mm/init_64.c | 10 +
arch/x86/virt/Makefile | 2 +
arch/x86/virt/vmx/Makefile | 2 +
arch/x86/virt/vmx/tdx/Makefile | 2 +
arch/x86/virt/vmx/tdx/seamcall.S | 52 ++
arch/x86/virt/vmx/tdx/tdx.c | 1422 ++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx/tdx.h | 118 +++
arch/x86/virt/vmx/tdx/tdxcall.S | 19 +-
14 files changed, 1852 insertions(+), 17 deletions(-)
create mode 100644 arch/x86/virt/Makefile
create mode 100644 arch/x86/virt/vmx/Makefile
create mode 100644 arch/x86/virt/vmx/tdx/Makefile
create mode 100644 arch/x86/virt/vmx/tdx/seamcall.S
create mode 100644 arch/x86/virt/vmx/tdx/tdx.c
create mode 100644 arch/x86/virt/vmx/tdx/tdx.h
base-commit: 00e07cfbdf0b232f7553f0175f8f4e8d792f7e90
--
2.38.1