Message-ID: <5f8c0ca4-ae99-4d1c-8525-51c6f1096eaa@redhat.com>
Date: Wed, 14 Aug 2024 19:57:57 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Peter Gonda <pgonda@...gle.com>, Michael Roth <michael.roth@....com>,
Vishal Annapurve <vannapurve@...gle.com>,
Ackerly Tng <ackerleytng@...gle.com>
Subject: Re: [PATCH 22/22] KVM: x86/mmu: Detect if unprotect will do anything based on invalid_list
On 8/9/24 21:03, Sean Christopherson wrote:
> Explicitly query the list of to-be-zapped shadow pages when checking to
> see if unprotecting a gfn for retry has succeeded, i.e. if KVM should
> retry the faulting instruction.
>
> Add a comment to explain why the list needs to be checked before zapping,
> which is the primary motivation for this change.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 300a47801685..50695eb2ee22 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2731,12 +2731,15 @@ bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> goto out;
> }
>
> - r = false;
> write_lock(&kvm->mmu_lock);
> - for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa)) {
> - r = true;
> + for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa))
> kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
> - }
> +
> + /*
> + * Snapshot the result before zapping, as zapping will remove all list
> + * entries, i.e. checking the list later would yield a false negative.
> + */
Hmm, the comment is kinda overkill? Maybe just
/* Return whether there were sptes to zap. */
	r = !list_empty(&invalid_list);
I'm not sure about patch 21 - I like the simple kvm_mmu_unprotect_page()
function. Maybe rename it to kvm_mmu_zap_gfn() and make it static in
the same patch?
Either way, this small cleanup applies even if the function is not inlined.
Thanks,
Paolo
> + r = !list_empty(&invalid_list);
> kvm_mmu_commit_zap_page(kvm, &invalid_list);
> write_unlock(&kvm->mmu_lock);
>