Message-ID: <aKS7ANG-_EJyEY6U@google.com>
Date: Tue, 19 Aug 2025 10:57:20 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: James Houghton <jthoughton@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Vipin Sharma <vipinsh@...gle.com>,
David Matlack <dmatlack@...gle.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 1/7] KVM: x86/mmu: Track TDP MMU NX huge pages separately
On Mon, Jul 07, 2025, James Houghton wrote:
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 4e06e2e89a8fa..f44d7f3acc179 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -65,9 +65,9 @@ int __read_mostly nx_huge_pages = -1;
> static uint __read_mostly nx_huge_pages_recovery_period_ms;
> #ifdef CONFIG_PREEMPT_RT
> /* Recovery can cause latency spikes, disable it for PREEMPT_RT. */
> -static uint __read_mostly nx_huge_pages_recovery_ratio = 0;
> +unsigned int __read_mostly nx_huge_pages_recovery_ratio;
> #else
> -static uint __read_mostly nx_huge_pages_recovery_ratio = 60;
> +unsigned int __read_mostly nx_huge_pages_recovery_ratio = 60;
Spurious changes.
> #endif
>
> static int get_nx_huge_pages(char *buffer, const struct kernel_param *kp);
...
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index db8f33e4de624..a8fd2de13f707 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -413,7 +413,10 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
> void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
> void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
>
> -void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> -void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> +void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
> + enum kvm_mmu_type mmu_type);
> +void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
> + enum kvm_mmu_type mmu_type);
>
> +extern unsigned int nx_huge_pages_recovery_ratio;
And here as well. I'll fix these up when applying.
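
For anyone skimming the thread, a minimal sketch of what the reworked
tracking API looks like from a caller's perspective. The enum values and
the call sites below are assumptions based on the series' stated goal of
keeping TDP MMU NX huge pages on a separate list; they are illustrative,
not hunks from the applied patch:

	/*
	 * Sketch only: enum values assumed from the series' intent to
	 * track shadow-MMU and TDP-MMU NX huge pages separately.
	 */
	enum kvm_mmu_type {
		KVM_SHADOW_MMU,
		KVM_TDP_MMU,
		KVM_NR_MMU_TYPES,
	};

	/*
	 * A TDP MMU path would tag the page with its MMU type so the
	 * NX recovery worker can walk the right list...
	 */
	track_possible_nx_huge_page(kvm, sp, KVM_TDP_MMU);

	/* ...and untracking mirrors the tracking call. */
	untrack_possible_nx_huge_page(kvm, sp, KVM_TDP_MMU);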