Message-ID: <YkIi0+O4BlWu2sBF@google.com>
Date: Mon, 28 Mar 2022 21:04:19 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Nikunj A Dadhania <nikunj@....com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Brijesh Singh <brijesh.singh@....com>,
Tom Lendacky <thomas.lendacky@....com>,
Peter Gonda <pgonda@...gle.com>,
Bharata B Rao <bharata@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Mingwei Zhang <mizhang@...gle.com>,
David Hildenbrand <david@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC v1 2/9] KVM: x86/mmu: Move hugepage adjust to
direct_page_fault

On Tue, Mar 08, 2022, Nikunj A Dadhania wrote:
> Both the TDP MMU and the legacy MMU do the hugepage adjustment in their
> mapping routines. Adjust the pfn early in the common code instead; the
> following patches will use this for pinning the pages.
>
> No functional change intended.

There is a functional change here, as kvm_mmu_hugepage_adjust() is now called
without mmu_lock being held.  That really shouldn't be problematic, but sadly KVM
very, very subtly relies on calling lookup_address_in_mm() while holding mmu_lock
_and_ after checking mmu_notifier_retry_hva().
https://lore.kernel.org/all/CAL715WL7ejOBjzXy9vbS_M2LmvXcC-CxmNr+oQtCZW0kciozHA@mail.gmail.com
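
To make the required ordering concrete, here is a hypothetical userspace
sketch (pthreads, not the real KVM code).  Every name in it is an invented
stand-in: invalidate_seq/invalidate_count model the mmu_notifier
sequence/count, host_level models the host page-table walk done by
lookup_address_in_mm(), and fault_in() models direct_page_fault():

/*
 * Hypothetical model of the ordering direct_page_fault() relies on.
 * Build with: cc -std=c11 -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_ulong invalidate_seq;	/* ~ kvm->mmu_notifier_seq */
static atomic_int invalidate_count;	/* ~ kvm->mmu_notifier_count */
static int host_level = 2;		/* ~ host page-table mapping level */

/* ~ mmu_notifier invalidate_range_start()/end() */
static void invalidate_range(void)
{
	pthread_mutex_lock(&mmu_lock);
	atomic_fetch_add(&invalidate_count, 1);
	atomic_fetch_add(&invalidate_seq, 1);
	pthread_mutex_unlock(&mmu_lock);

	host_level = 1;		/* host splits/changes its page tables */

	pthread_mutex_lock(&mmu_lock);
	atomic_fetch_sub(&invalidate_count, 1);
	pthread_mutex_unlock(&mmu_lock);
}

/* ~ direct_page_fault() */
static int fault_in(void)
{
	for (;;) {
		/* Snapshot taken before faulting in the pfn. */
		unsigned long seq = atomic_load(&invalidate_seq);
		int level;

		pthread_mutex_lock(&mmu_lock);

		/* ~ mmu_notifier_retry_hva(): an invalidation in
		 * flight, or one completed since the snapshot, means
		 * the host walk can't be trusted => retry. */
		if (atomic_load(&invalidate_count) ||
		    seq != atomic_load(&invalidate_seq)) {
			pthread_mutex_unlock(&mmu_lock);
			continue;	/* ~ RET_PF_RETRY */
		}

		/*
		 * Only here is the host walk trustworthy, i.e. this is
		 * where kvm_mmu_hugepage_adjust() has to run.
		 */
		level = host_level;
		pthread_mutex_unlock(&mmu_lock);
		return level;
	}
}

int main(void)
{
	invalidate_range();
	printf("mapped at host level %d\n", fault_in());
	return 0;
}

The patch as posted is the equivalent of hoisting the host_level read above
the lock+retry pair, which is why the adjustment can observe a stale host
mapping level mid-invalidation.
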
> Signed-off-by: Nikunj A Dadhania <nikunj@....com>
> ---
> arch/x86/kvm/mmu/mmu.c | 4 ++--
> arch/x86/kvm/mmu/tdp_mmu.c | 2 --
> 2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 8e24f73bf60b..db1feecd6fed 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2940,8 +2940,6 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> int ret;
> gfn_t base_gfn = fault->gfn;
>
> - kvm_mmu_hugepage_adjust(vcpu, fault);
> -
> trace_kvm_mmu_spte_requested(fault);
> for_each_shadow_entry(vcpu, fault->addr, it) {
> /*
> @@ -4035,6 +4033,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>
> r = RET_PF_RETRY;
>
> + kvm_mmu_hugepage_adjust(vcpu, fault);
> +
> if (is_tdp_mmu_fault)
> read_lock(&vcpu->kvm->mmu_lock);
> else
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index bc9e3553fba2..e03bf59b2f81 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -959,8 +959,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> u64 new_spte;
> int ret;
>
> - kvm_mmu_hugepage_adjust(vcpu, fault);
> -
> trace_kvm_mmu_spte_requested(fault);
>
> rcu_read_lock();
> --
> 2.32.0
>