Message-ID: <Yt8hu/+I8YzVckvU@google.com>
Date: Mon, 25 Jul 2022 16:05:31 -0700
From: David Matlack <dmatlack@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Yosry Ahmed <yosryahmed@...gle.com>,
Mingwei Zhang <mizhang@...gle.com>,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH v2 2/6] KVM: x86/mmu: Properly account NX huge page
workaround for nonpaging MMUs

On Sat, Jul 23, 2022 at 01:23:21AM +0000, Sean Christopherson wrote:
> Account and track NX huge pages for nonpaging MMUs so that a future
> enhancement to precisely check if a shadow page cannot be replaced by an NX
> huge page doesn't get false positives. Without correct tracking, KVM can
> get stuck in a loop if an instruction is fetching and writing data on the
> same huge page, e.g. KVM installs a small executable page on the fetch
> fault, replaces it with an NX huge page on the write fault, and faults
> again on the fetch.
>
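
(For other readers: the "future enhancement" is the precise per-SP check
later in the series. Paraphrasing from memory, with approximate names, the
gist is the sketch below. If the small executable page installed on the
fetch fault is never flagged, it looks replaceable, gets clobbered by an
NX huge page on the write fault, and the next fetch faults all over again.)

	/*
	 * Sketch only, not the actual series code: an existing small SPTE
	 * may be replaced by a huge page only if its child shadow page
	 * wasn't created because of the NX huge page mitigation.
	 */
	static bool spte_replaceable_by_nx_huge_page(u64 spte)
	{
		struct kvm_mmu_page *sp = spte_to_child_sp(spte);

		/* Stays false when the accounting is skipped. */
		return !sp->nx_huge_page_disallowed;
	}
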
> Alternatively, and perhaps ideally, KVM would simply not enforce the
> workaround for nonpaging MMUs. The guest has no page tables to abuse
> and KVM is guaranteed to switch to a different MMU on CR0.PG being
> toggled, so there are no security or performance concerns. However, getting
> make_spte() to play nice now and in the future is unnecessarily complex.
>
> In the current code base, make_spte() can enforce the mitigation if TDP
> is enabled or the MMU is indirect, but make_spte() may not always have a
> vCPU/MMU to work with, e.g. if KVM were to support in-line huge page
> promotion when disabling dirty logging.
>
> Without a vCPU/MMU, KVM could either pass in the correct information
> and/or derive it from the shadow page, but the former is ugly and the
> latter is subtly non-trivial due to the possibility of direct shadow pages
> in indirect MMUs. Given that using shadow paging with an unpaged guest
> is far from top priority _and_ has been subjected to the workaround since
> its inception, keep it simple and just fix the accounting glitch.
>
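
FWIW, to spell out the "subtly non-trivial" part for anyone skimming: the
tempting derivation from the shadow page alone would be something like the
line below (a sketch, not a suggestion), and it's wrong precisely because
indirect MMUs also create direct shadow pages, e.g. when shadowing a guest
huge page with smaller host pages.

	/*
	 * Sketch of the naive derivation: "direct SP => unpaged guest =>
	 * mitigation not needed".  Bogus, since a direct SP can also live
	 * in an indirect MMU (or a TDP MMU), where the mitigation must
	 * still be enforced.
	 */
	bool skip_nx_huge_page_mitigation = sp->role.direct;	/* buggy */
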
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>

It's odd that KVM enforced the NX huge page mitigation but just skipped the
accounting. In retrospect, that was bound to cause issues.

Aside from the comment suggestion below,
Reviewed-by: David Matlack <dmatlack@...gle.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 2 +-
> arch/x86/kvm/mmu/mmu_internal.h | 8 ++++++++
> arch/x86/kvm/mmu/spte.c | 11 +++++++++++
> 3 files changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 1112e3a4cf3e..493cdf1c29ff 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3135,7 +3135,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> continue;
>
> link_shadow_page(vcpu, it.sptep, sp);
> - if (fault->is_tdp && fault->huge_page_disallowed)
> + if (fault->huge_page_disallowed)
> account_nx_huge_page(vcpu->kvm, sp,
> fault->req_level >= it.level);
> }
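
For readers without the tree handy, my mental model of the helper being
called here, paraphrased (field/helper names approximate, not a verbatim
quote of this series): always tag the shadow page so the precise check
works, and only put it on the recovery worker's list when a huge page was
actually possible at this level.

	/* Paraphrased sketch of the accounting helper, not the series code. */
	void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
				  bool nx_huge_page_possible)
	{
		/* Always tag the SP, even if no huge page was possible. */
		sp->nx_huge_page_disallowed = true;

		if (nx_huge_page_possible)
			track_possible_nx_huge_page(kvm, sp);
	}
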
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index ff4ca54b9dda..83644a0167ab 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -201,6 +201,14 @@ struct kvm_page_fault {
>
> /* Derived from mmu and global state. */
> const bool is_tdp;
> +
> + /*
> + * Note, enforcing the NX huge page mitigation for nonpaging MMUs
> + * (shadow paging, CR0.PG=0 in the guest) is completely unnecessary.
> + * The guest doesn't have any page tables to abuse and is guaranteed
> + * to switch to a different MMU when CR0.PG is toggled on (may not
> + * always be guaranteed when KVM is using TDP). See also make_spte().
> + */
> const bool nx_huge_page_workaround_enabled;
>
> /*
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 7314d27d57a4..9f3e5af088a5 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -147,6 +147,17 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> if (!prefetch)
> spte |= spte_shadow_accessed_mask(spte);
>
> + /*
> + * For simplicity, enforce the NX huge page mitigation even if not
> + * strictly necessary. KVM could ignore the mitigation if paging is
> + * disabled in the guest, but KVM would then have to ensure a new MMU
> + * is loaded (or all shadow pages zapped) when CR0.PG is toggled on,
> + * and that's a net negative for performance when TDP is enabled. KVM
> + * could ignore the mitigation if TDP is disabled and CR0.PG=0, as KVM
> + * will always switch to a new MMU if paging is enabled in the guest,
> + * but that adds complexity just to optimize a mode that is anything
> + * but performance critical.
> + */

I had some trouble parsing the last sentence. How about this for slightly
better flow:
/*
* For simplicity, enforce the NX huge page mitigation even if not
 * strictly necessary. KVM could ignore the mitigation if paging is
* disabled in the guest, but KVM would then have to ensure a new MMU
* is loaded (or all shadow pages zapped) when CR0.PG is toggled on,
* and that's a net negative for performance when TDP is enabled. When
* TDP is disabled, KVM will always switch to a new MMU when CR0.PG is
* toggled, but that would tie make_spte() further to vCPU/MMU state
* and add complexity just to optimize a mode that is anything but
* performance critical.
*/
> if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
> is_nx_huge_page_enabled(vcpu->kvm)) {
> pte_access &= ~ACC_EXEC_MASK;
> --
> 2.37.1.359.gd136c6c3e2-goog
>