Message-ID: <ZHD2rYBCSe5OSYIU@google.com>
Date: Fri, 26 May 2023 11:13:01 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Uros Bizjak <ubizjak@...il.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
David Matlack <dmatlack@...gle.com>
Subject: Re: [PATCH] KVM: x86/mmu: Add comment on try_cmpxchg64 usage in tdp_mmu_set_spte_atomic
On Tue, Apr 25, 2023, Uros Bizjak wrote:
> Commit aee98a6838d5 ("KVM: x86/mmu: Use try_cmpxchg64 in
> tdp_mmu_set_spte_atomic") removed the comment that iter->old_spte is
> updated when a different logical CPU modifies the page table entry.
> Although this is what try_cmpxchg does implicitly, it won't hurt
> to mention this fact explicitly in a restored comment.
>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Sean Christopherson <seanjc@...gle.com>
> Cc: David Matlack <dmatlack@...gle.com>
> Signed-off-by: Uros Bizjak <ubizjak@...il.com>
> ---
> arch/x86/kvm/mmu/tdp_mmu.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7c25dbf32ecc..5d126b015086 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -655,8 +655,16 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
>  	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
>  	 * does not hold the mmu_lock.
>  	 */
> -	if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte))
> +	if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte)) {
> +		/*
> +		 * The page table entry was modified by a different logical
> +		 * CPU. In this case the above try_cmpxchg updates
> +		 * iter->old_spte with the current value, so the caller
> +		 * operates on fresh data, e.g. if it retries
> +		 * tdp_mmu_set_spte_atomic().
> +		 */

If there's no objection, when applying I'll massage this to extend the comment
above the try_cmpxchg64(), e.g.

	/*
	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
	 * does not hold the mmu_lock. On failure, i.e. if a different logical
	 * CPU modified the SPTE, try_cmpxchg64() updates iter->old_spte with
	 * the current value, so the caller operates on fresh data, e.g. if it
	 * retries tdp_mmu_set_spte_atomic().
	 */
	if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte))
		return -EBUSY;
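
As an aside, for anyone not familiar with the try_cmpxchg() family: the
"refresh the expected value on failure" behavior the comment describes is
the same contract C11 exposes via atomic_compare_exchange_strong(). Below
is a minimal userspace sketch (not kernel code; the names spte and
set_spte_atomic() are invented for illustration) showing why the caller
can retry without re-reading the SPTE:

	/*
	 * Userspace sketch only -- illustrates the try_cmpxchg()/try_cmpxchg64()
	 * contract via C11's atomic_compare_exchange_strong(), which likewise
	 * writes the observed value back into *expected on failure.  spte and
	 * set_spte_atomic() are invented names for this example.
	 */
	#include <inttypes.h>
	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	static _Atomic uint64_t spte;	/* stand-in for the SPTE slot (*sptep) */

	/* Returns 0 on success, -1 (think -EBUSY) if another CPU changed the SPTE. */
	static int set_spte_atomic(uint64_t *old_spte, uint64_t new_spte)
	{
		if (!atomic_compare_exchange_strong(&spte, old_spte, new_spte)) {
			/* *old_spte now holds the value the other CPU installed. */
			return -1;
		}
		return 0;
	}

	int main(void)
	{
		uint64_t old = 0;

		atomic_store(&spte, 0x123);	/* simulate a racing update */

		if (set_spte_atomic(&old, 0x456))
			printf("lost the race, old_spte refreshed to 0x%" PRIx64 "\n",
			       old);		/* prints 0x123 */

		/* A retry against the refreshed value now succeeds. */
		if (!set_spte_atomic(&old, 0x456))
			printf("retry succeeded, spte = 0x%" PRIx64 "\n",
			       atomic_load(&spte));
		return 0;
	}

Built with e.g. "gcc -std=c11", the first call fails and leaves 0x123 in
old, and the retry then succeeds, which is exactly the "caller operates on
fresh data" behavior the restored comment documents.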