Message-Id: <cover.1659854957.git.isaku.yamahata@intel.com>
Date: Sun, 7 Aug 2022 15:18:33 -0700
From: isaku.yamahata@...el.com
To: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: isaku.yamahata@...el.com, isaku.yamahata@...il.com,
Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
Sean Christopherson <seanjc@...gle.com>,
Sagi Shahar <sagis@...gle.com>
Subject: [RFC PATCH 00/13] KVM TDX: TDP MMU: large page support
From: Isaku Yamahata <isaku.yamahata@...el.com>
This patch series is based on "v8 KVM TDX: basic feature support". It
implements large page support for the TDP MMU by allowing large pages to be
populated and split when necessary. Merging 4K/2M pages into 2M/1G pages is
not supported.
Feedback on the options for merging sub-pages into a large page is welcome.
Options for merging sub-pages into a large page
===============================================
A) Have the NX page recovery daemon actively merge pages into a large page
while scanning.
+ implementation would be simple
- inefficient because it always scans sub-pages
- inefficient because it merges even unused (cold) pages
B) On a normal EPT violation, check whether pages can be merged into a large
page after mapping it.
+ no scanning is needed
- inefficient because it adds more logic to a fast path
C) Use TDH.MEM.RANGE.BLOCK instead of zapping the EPT entry, and record that
the entry is blocked. On EPT violation, check whether the entry is blocked.
If the EPT violation was caused by a blocked Secure-EPT entry, trigger the
page merge logic.
+ reuses the scanning logic (NX recovery daemon)
+ takes advantage of EPT violations
- would be complex
That is: block instead of zapping, track the blocked Secure-EPT entry, unblock
it on the EPT violation, and then run the page merge logic (a sketch of the
blocked-entry tracking follows).
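A minimal sketch of the option C bookkeeping, assuming a hypothetical
software-available SPTE bit and helper names (SPTE_TDX_BLOCKED_BIT,
mark_spte_blocked(), ...) that are not part of this series:

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t spte_t;

/* Hypothetical software-available bit used to record "blocked, not zapped". */
#define SPTE_TDX_BLOCKED_BIT	(1ULL << 60)

static inline spte_t mark_spte_blocked(spte_t spte)
{
	/* Keep the PFN and attributes; only record the blocked state. */
	return spte | SPTE_TDX_BLOCKED_BIT;
}

static inline bool is_spte_blocked(spte_t spte)
{
	return spte & SPTE_TDX_BLOCKED_BIT;
}

/*
 * On EPT violation: a blocked (not zapped) entry means the guest touched
 * the range again, so the merge logic can run before unblocking.
 */
static inline bool fault_should_try_merge(spte_t spte)
{
	return is_spte_blocked(spte);
}

The point is that the entry keeps its contents and attributes, so the
violation handler can tell "blocked for merge" apart from a genuinely
non-present entry.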
The current implementation (splitting large pages when necessary)
=================================================================
* It already tracks whether a GFN is private or shared. When that changes,
lpage_info is updated to disallow a large page (see the sketch after this
list).
* TDX provides the page level on a Secure-EPT violation. Pass the page level
down to the lower-level functions that need it.
* Even if the page is a large page on the host, only some sub-pages may be
mapped at the EPT level. In that case, give up mapping a large page and step
down to the sub-page level, unlike the conventional EPT.
* When zapping an SPTE that maps a large page, split the large page and then
zap it, unlike the conventional EPT; otherwise the protected page contents
would be lost.
* Merging pages into a large page is not implemented.
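A minimal illustration of the first bullet (tracking mixed private/shared
ranges), assuming a simplified lpage_info-style counter; the structure and
helper names below are made up for illustration, not the series' actual code:

#include <stdbool.h>

struct lpage_info_sketch {
	int disallow_lpage;	/* non-zero: no large page at this level */
};

/* Called when a 2M/1G range transitions between mixed and uniform. */
static void account_private_shared_mix(struct lpage_info_sketch *linfo,
				       bool becomes_mixed)
{
	if (becomes_mixed)
		linfo->disallow_lpage++;	/* now mixed: forbid a large page */
	else
		linfo->disallow_lpage--;	/* uniform again: large page allowed */
}

static bool large_page_allowed(const struct lpage_info_sketch *linfo)
{
	return linfo->disallow_lpage == 0;
}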
Discussion on merging pages into a large page
=============================================
Live migration support for TDX is planned. That means dirty page logging will
be supported, and a large page will be split when dirty page logging is
enabled. After it is disabled, the pages should be merged back into large
pages for performance.
The current implementation for the conventional EPT is:
* THP or NX page recovery zaps EPT entries. This step doesn't directly map a
large page.
* On the next EPT violation, when a large page is possible, map it as a large
page.
This is done because:
* Mapping large pages on the EPT violation avoids unnecessary page merging for
cold SPTEs. This is also desirable for TDX, to avoid unnecessary Secure-EPT
operations.
* It reuses the KVM page fault path (the fault-time level decision is sketched
below).
For TDX, new logic is needed to merge sub-pages into a large page.
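For illustration only, a sketch of the fault-time level decision that the
conventional flow relies on; the helper names (host_mapping_level(),
large_page_disallowed()) are hypothetical stand-ins, not the real KVM fault
path:

#include <stdbool.h>

enum pg_level { PG_LEVEL_4K = 1, PG_LEVEL_2M, PG_LEVEL_1G };

/* Hypothetical stubs standing in for lpage_info tracking and the host mapping. */
static int host_mapping_level(unsigned long gfn)
{
	(void)gfn;
	return PG_LEVEL_2M;	/* pretend the host backs this GFN with a 2M page */
}

static bool large_page_disallowed(unsigned long gfn, int level)
{
	(void)gfn;
	(void)level;
	return false;		/* pretend nothing forbids a large page */
}

static int fault_max_level(unsigned long gfn, int max_level)
{
	int level = max_level;
	int host_level = host_mapping_level(gfn);

	/* Clamp to what the host backing actually provides. */
	if (host_level < level)
		level = host_level;

	/* Step down while a large page is explicitly disallowed. */
	while (level > PG_LEVEL_4K && large_page_disallowed(gfn, level))
		level--;

	/* Map at this level when handling the EPT violation. */
	return level;
}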
TDX operation
-------------
* EPT violation trick
This trick (zapping the EPT entry to trigger an EPT violation) doesn't work for
TDX. For TDX, zapping a page loses the contents of the protected page because
the protected guest page is dis-associated from the guest TD. Instead, TDX
provides a different way to trigger an EPT violation without losing the page
contents, so that the VMM can detect guest TD activity: blocking/unblocking the
Secure-EPT entry with TDH.MEM.RANGE.BLOCK and TDH.MEM.RANGE.UNBLOCK. They
correspond to clearing/setting a present bit in an EPT entry while the page
contents are kept. With TDH.MEM.RANGE.BLOCK and a TLB shootdown, the VMM can
cause the guest TD to trigger an EPT violation. After that, the VMM can unblock
the entry with TDH.MEM.RANGE.UNBLOCK and resume guest TD execution. The
procedure is as follows (a sketch follows the steps).
- Block Secure-EPT entry by TDH.MEM.RANGE.BLOCK.
- TLB shoot down.
- Wait for guest TD to trigger EPT violation.
- Unblock Secure-EPT entry by TDH.MEM.RANGE.UNBLOCK to resume the guest TD.
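A minimal sketch of that sequence, assuming hypothetical wrapper names
(tdh_mem_range_block_sketch() and friends); these are not the tdx_ops.h
interfaces from the series:

#include <stdint.h>

typedef uint64_t gpa_t;

/* Hypothetical wrappers around TDH.MEM.RANGE.{BLOCK,UNBLOCK} and TLB flush. */
extern int tdh_mem_range_block_sketch(gpa_t gpa, int level);
extern int tdh_mem_range_unblock_sketch(gpa_t gpa, int level);
extern void flush_remote_tlbs_sketch(void);

/*
 * Make the guest TD fault on the range without losing the protected page
 * contents, then restore the mapping so the TD can resume.
 */
static int trigger_ept_violation_without_zap(gpa_t gpa, int level)
{
	int ret;

	/* 1. Block: clear "present" semantics while keeping the contents. */
	ret = tdh_mem_range_block_sketch(gpa, level);
	if (ret)
		return ret;

	/* 2. TLB shootdown so the guest actually faults on the next access. */
	flush_remote_tlbs_sketch();

	/*
	 * 3. In the real flow the EPT violation handler runs the merge logic
	 *    and then unblocks; the unblock is shown right away here only to
	 *    pair the two calls.
	 */
	return tdh_mem_range_unblock_sketch(gpa, level);
}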
* Merging sub-pages into a large page
The following steps are needed (a sketch follows the steps).
- Ensure that all sub-pages are mapped.
- TLB shoot down.
- Merge the sub-pages into a large page (TDH.MEM.PAGE.PROMOTE).
  This requires that all sub-pages be mapped.
- Cache-flush the Secure-EPT page that was used to map the sub-pages.
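A minimal sketch of the promotion flow, again with hypothetical helper names
(ensure_subpages_mapped_sketch(), tdh_mem_page_promote_sketch(), ...) rather
than the series' real functions:

#include <stdint.h>

typedef uint64_t gpa_t;

extern int ensure_subpages_mapped_sketch(gpa_t gpa, int target_level);
extern void flush_remote_tlbs_sketch(void);
extern int tdh_mem_page_promote_sketch(gpa_t gpa, int target_level);
extern void flush_sept_page_cache_sketch(gpa_t gpa, int target_level);

static int merge_into_large_page(gpa_t gpa, int target_level)
{
	int ret;

	/* TDH.MEM.PAGE.PROMOTE requires every sub-page to be mapped. */
	ret = ensure_subpages_mapped_sketch(gpa, target_level);
	if (ret)
		return ret;

	/* Drop stale translations before changing the mapping level. */
	flush_remote_tlbs_sketch();

	/* Merge the sub-pages into one large Secure-EPT mapping. */
	ret = tdh_mem_page_promote_sketch(gpa, target_level);
	if (ret)
		return ret;

	/* Cache-flush the Secure-EPT page that used to map the sub-pages. */
	flush_sept_page_cache_sketch(gpa, target_level);
	return 0;
}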
Thanks,
Chao Peng (1):
KVM: Update lpage info when private/shared memory are mixed
Xiaoyao Li (12):
KVM: TDP_MMU: Go to next level if smaller private mapping exists
KVM: TDX: Pass page level to cache flush before TDX SEAMCALL
KVM: TDX: Pass KVM page level to tdh_mem_page_add() and
tdh_mem_page_aug()
KVM: TDX: Pass size to tdx_measure_page()
KVM: TDX: Pass size to reclaim_page()
KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large
page
KVM: MMU: Introduce level info in PFERR code
KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
KVM: MMU: Pass desired page level in err code for page fault handler
KVM: TDP_MMU: Split the large page when zap leaf
KVM: TDX: Split a large page when 4KB page within it converted to
shared
KVM: x86: remove struct kvm_arch.tdp_max_page_level
arch/x86/include/asm/kvm_host.h | 14 ++-
arch/x86/kvm/mmu/mmu.c | 158 ++++++++++++++++++++++++++++-
arch/x86/kvm/mmu/mmu_internal.h | 4 +-
arch/x86/kvm/mmu/tdp_mmu.c | 31 +++++-
arch/x86/kvm/vmx/common.h | 6 +-
arch/x86/kvm/vmx/tdx.c | 174 +++++++++++++++++++++-----------
arch/x86/kvm/vmx/tdx_arch.h | 20 ++++
arch/x86/kvm/vmx/tdx_ops.h | 46 ++++++---
arch/x86/kvm/vmx/vmx.c | 2 +-
include/linux/kvm_host.h | 10 ++
virt/kvm/kvm_main.c | 9 +-
11 files changed, 390 insertions(+), 84 deletions(-)
--
2.25.1