Message-ID: <cfeb3b8b02646b073d5355495ec8842ac33aeae5.camel@intel.com>
Date: Thu, 30 Jun 2022 23:37:15 +1200
From: Kai Huang <kai.huang@...el.com>
To: isaku.yamahata@...el.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: isaku.yamahata@...il.com, Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH v7 035/102] KVM: x86/mmu: Explicitly check for MMIO spte
in fast page fault
On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@...el.com wrote:
> From: Sean Christopherson <sean.j.christopherson@...el.com>
>
> Explicitly check for an MMIO spte in the fast page fault flow. TDX will
> use a not-present entry for MMIO sptes, which can be mistaken for an
> access-tracked spte since both have SPTE_SPECIAL_MASK set.
SPTE_SPECIAL_MASK has been removed from the latest KVM code, so the changelog
needs to be updated.
In fact, if I understand correctly, I don't think this changelog is correct.
The existing code doesn't check is_mmio_spte() because (see the rough sketch
below):
1) If MMIO caching is enabled, an MMIO fault is always handled in
handle_mmio_page_fault() before reaching here;
2) If MMIO caching is disabled, is_shadow_present_pte() always returns false
for an MMIO spte, and is_mmio_spte() also always returns false for it, so
there's no need to check it here.
"A non-present entry for MMIO spte" doesn't necessarily mean
is_shadow_present_pte() will return true for it, and there's no explanation at
all that for TDX guest a MMIO spte could reach here and is_shadow_present_pte()
returns true for it.
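To spell out my reading of the hunk below: the new is_mmio_spte() check can
only make a difference if an MMIO spte can be shadow-present, i.e.:

	if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
		break;	/* dead check unless the MMIO spte is "present" */
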
If this patch is ever needed, it should come with, or after, the patch (or
patches) that handle MMIO faults for TD guests.
Hi Sean, Paolo,
Did I miss anything?
>
> MMIO sptes are handled in handle_mmio_page_fault for non-TDX VMs, so this
> patch does not affect them. TDX will handle MMIO emulation through a
> hypercall instead.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 17252f39bd7c..51306b80f47c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3163,7 +3163,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> else
> sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
>
> - if (!is_shadow_present_pte(spte))
> + if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
> break;
>
> sp = sptep_to_sp(sptep);