Message-ID: <11ccff55a5fbb52d59c36c3dbb88c8d8d937b60e.camel@intel.com>
Date: Wed, 9 Nov 2022 12:46:11 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>,
"Shahar, Sagi" <sagis@...gle.com>,
"Aktas, Erdem" <erdemaktas@...gle.com>,
"isaku.yamahata@...il.com" <isaku.yamahata@...il.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>,
"Christopherson,, Sean" <seanjc@...gle.com>
Subject: Re: [PATCH v10 036/108] KVM: TDX: Enable mmio spte caching always for
TDX
On Sat, 2022-10-29 at 23:22 -0700, isaku.yamahata@...el.com wrote:
> From: Isaku Yamahata <isaku.yamahata@...el.com>
>
> TDX needs to set the shared SPTE for an MMIO GFN to !SUPPRESS_VE_BIT | !RWX so
> that the guest TD can get a #VE and then issue TDG.VP.VMCALL<MMIO>. Always
> enable MMIO caching for TDX, regardless of the module parameter
> enable_mmio_caching.
>
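(For context: the resulting shared-EPT MMIO SPTE is effectively all zeroes in
the relevant bits.  A minimal illustration -- the names below are made up for
clarity, not the actual KVM/EPT macros:)

	/* Illustrative bit positions, per the SDM's EPT entry format. */
	#define EPT_RWX_MASK    0x7ULL        /* bits 2:0: read/write/execute */
	#define EPT_SUPPRESS_VE (1ULL << 63)  /* bit 63: suppress #VE         */

	/*
	 * Clearing RWX makes every access to the MMIO GFN fault, and
	 * clearing SUPPRESS_VE turns that fault into a #VE injected into
	 * the guest TD, which then asks the VMM to emulate the access via
	 * TDG.VP.VMCALL<MMIO>.
	 */
	u64 tdx_shared_mmio_spte = 0;   /* !SUPPRESS_VE_BIT | !RWX */
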
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 3 ++-
> arch/x86/kvm/mmu/spte.h | 2 +-
> arch/x86/kvm/mmu/tdp_mmu.c | 7 +++++++
> 3 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 0d3fa29ccccc..9098f77cdaa4 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3229,7 +3229,8 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
> * and only if L1's MAXPHYADDR is inaccurate with respect to
> * the hardware's).
> */
> - if (unlikely(!enable_mmio_caching) ||
> + if (unlikely(!enable_mmio_caching &&
> + !kvm_gfn_shared_mask(vcpu->kvm)) ||
> unlikely(fault->gfn > kvm_mmu_max_gfn()))
> return RET_PF_EMULATE;
> }
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 82f0d5c08b77..fecfdcb5f321 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -244,7 +244,7 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
> static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
> {
> return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
> - likely(enable_mmio_caching);
> + likely(enable_mmio_caching || kvm_gfn_shared_mask(kvm));
> }
Sean suggested we can change this to also treat "mmio_value == 0" as MMIO
caching being enabled:
https://lore.kernel.org/lkml/244f619a4e7a1c7079830d12379872a111da418d.camel@intel.com/
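
I.e. something like below (untested sketch, just to illustrate the idea --
for TDX the per-VM shadow_mmio_value is 0, so checking it avoids the extra
kvm_gfn_shared_mask() call):

	static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
	{
		return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
		       likely(enable_mmio_caching ||
			      !kvm->arch.shadow_mmio_value);
	}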
>
> static inline bool is_shadow_present_pte(u64 pte)
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index e07f14351d14..3325633b1cb5 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1875,6 +1875,13 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
>
> *root_level = vcpu->arch.mmu->root_role.level;
>
> + /*
> +	 * MMIO page fault isn't supported for protected guests because
> +	 * instructions in protected guest memory can't be parsed by the VMM.
> + */
> + if (WARN_ON_ONCE(kvm_gfn_shared_mask(vcpu->kvm)))
> + return leaf;
> +
It's weird to put this here.

I think the logic is that, for a TDX guest (or similar protected guests),
handle_mmio_page_fault() should not be reached at all.  So can we just add a
WARN() against kvm_gfn_shared_mask(), or is_tdx_vm(), at the very beginning
of handle_mmio_page_fault()?
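
Roughly like below (untested; the early-return value is just for
illustration):

	static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr,
					  bool direct)
	{
		/*
		 * The VMM cannot parse instructions in protected guest
		 * memory, so a protected guest never takes an MMIO page
		 * fault that needs emulation -- it requests MMIO explicitly
		 * via TDG.VP.VMCALL<MMIO> instead.
		 */
		if (WARN_ON_ONCE(kvm_gfn_shared_mask(vcpu->kvm)))
			return RET_PF_EMULATE;
		...
	}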
> tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> leaf = iter.level;
> sptes[leaf] = iter.old_spte;