Message-ID: <CADrL8HXoGeafwNvSyMF0=MMjkbodyYrA6+J638QXgyC_TW76XQ@mail.gmail.com>
Date: Thu, 25 Jul 2024 17:34:29 -0700
From: James Houghton <jthoughton@...gle.com>
To: David Matlack <dmatlack@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Paolo Bonzini <pbonzini@...hat.com>,
Ankit Agrawal <ankita@...dia.com>, Axel Rasmussen <axelrasmussen@...gle.com>,
Catalin Marinas <catalin.marinas@....com>, David Rientjes <rientjes@...gle.com>,
James Morse <james.morse@....com>, Jason Gunthorpe <jgg@...pe.ca>, Jonathan Corbet <corbet@....net>,
Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
Raghavendra Rao Ananta <rananta@...gle.com>, Ryan Roberts <ryan.roberts@....com>,
Sean Christopherson <seanjc@...gle.com>, Shaoqin Huang <shahuang@...hat.com>,
Suzuki K Poulose <suzuki.poulose@....com>, Wei Xu <weixugc@...gle.com>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>, Zenghui Yu <yuzenghui@...wei.com>,
kvmarm@...ts.linux.dev, kvm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v6 02/11] KVM: x86: Relax locking for kvm_test_age_gfn and kvm_age_gfn
On Thu, Jul 25, 2024 at 11:08 AM David Matlack <dmatlack@...gle.com> wrote:
>
> On 2024-07-24 01:10 AM, James Houghton wrote:
> > Walk the TDP MMU in an RCU read-side critical section. This requires a
> > way to do RCU-safe walking of the tdp_mmu_roots; do this with a new
> > macro. The PTE modifications are now done atomically, and
> > kvm_tdp_mmu_spte_need_atomic_write() has been updated to account for the
> > fact that kvm_age_gfn can now locklessly update the Accessed bit and
> > the R/X bits.
> >
> > If the cmpxchg for marking the spte for access tracking fails, we simply
> > retry if the spte is still a leaf PTE. If it isn't, we return false
> > to continue the walk.
> >
> > Harvesting age information from the shadow MMU is still done while
> > holding the MMU write lock.
> >
> > Suggested-by: Yu Zhao <yuzhao@...gle.com>
> > Signed-off-by: James Houghton <jthoughton@...gle.com>
>
> Aside from the comment fixes below,
>
> Reviewed-by: David Matlack <dmatlack@...gle.com>
Thank you!
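(For context on the "new macro" part of the commit message: it is
essentially an RCU flavor of the existing root iterators. A rough
sketch, ignoring the invalid-root filtering the real patch does:

	#define for_each_tdp_mmu_root_rcu(_kvm, _root)			\
		list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link)

used under rcu_read_lock() instead of mmu_lock.)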
>
> > ---
> > arch/x86/include/asm/kvm_host.h | 1 +
> > arch/x86/kvm/Kconfig | 1 +
> > arch/x86/kvm/mmu/mmu.c | 10 ++++-
> > arch/x86/kvm/mmu/tdp_iter.h | 27 +++++++------
> > arch/x86/kvm/mmu/tdp_mmu.c | 67 +++++++++++++++++++++++++--------
> > 5 files changed, 77 insertions(+), 29 deletions(-)
> >
> [...]
> > --- a/arch/x86/kvm/mmu/tdp_iter.h
> > +++ b/arch/x86/kvm/mmu/tdp_iter.h
> > @@ -25,6 +25,13 @@ static inline u64 kvm_tdp_mmu_write_spte_atomic(tdp_ptep_t sptep, u64 new_spte)
> > return xchg(rcu_dereference(sptep), new_spte);
> > }
> >
> > +static inline u64 tdp_mmu_clear_spte_bits_atomic(tdp_ptep_t sptep, u64 mask)
> > +{
> > + atomic64_t *sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
> > +
> > + return (u64)atomic64_fetch_and(~mask, sptep_atomic);
> > +}
> > +
> > static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
> > {
> > KVM_MMU_WARN_ON(is_ept_ve_possible(new_spte));
> > @@ -32,10 +39,11 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
> > }
> >
> > /*
> > - * SPTEs must be modified atomically if they are shadow-present, leaf
> > - * SPTEs, and have volatile bits, i.e. has bits that can be set outside
> > - * of mmu_lock. The Writable bit can be set by KVM's fast page fault
> > - * handler, and Accessed and Dirty bits can be set by the CPU.
> > + * SPTEs must be modified atomically if they have bits that can be set outside
> > + * of the mmu_lock. This can happen for any shadow-present leaf SPTEs, as the
> > + * Writable bit can be set by KVM's fast page fault handler, the Accessed and
> > + * Dirty bits can be set by the CPU, and the Accessed and R/X bits can be
>
> "R/X bits" should be "W/R/X bits".
Thanks. Right, we are clearing all of VMX_EPT_RWX_MASK.
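For EPT without A/D bits, kvm_mmu_set_ept_masks() sets
shadow_acc_track_mask to VMX_EPT_RWX_MASK, so the access-tracking path
in mark_spte_for_access_track() roughly does (paraphrasing from memory):

	/* Stash R/X so the fast path can restore them on re-access. */
	spte |= (spte & SHADOW_ACC_TRACK_SAVED_BITS_MASK) <<
		SHADOW_ACC_TRACK_SAVED_BITS_SHIFT;
	/* ... then drop W/R/X, i.e. all of VMX_EPT_RWX_MASK. */
	spte &= ~shadow_acc_track_mask;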
>
> > + * cleared by age_gfn_range.
>
> nit: "age_gfn_range()"
Thanks, will fix this and all the other places where I've left off the ().
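FWIW, the retry described in the commit message boils down to something
like the below (a simplified sketch, not the literal patch code; the
function name is made up):

	static bool tdp_mmu_age_spte_lockless(struct tdp_iter *iter)
	{
		u64 new_spte;

		if (spte_ad_enabled(iter->old_spte)) {
			/* A/D bits in use: an atomic AND is enough. */
			iter->old_spte = tdp_mmu_clear_spte_bits_atomic(
					iter->sptep, shadow_accessed_mask);
			return true;
		}

		/*
		 * No A/D bits: mark the SPTE for access tracking with a
		 * cmpxchg. On failure (__tdp_mmu_set_spte_atomic() has
		 * refreshed iter->old_spte), retry as long as the SPTE
		 * is still a leaf; otherwise return false so the caller
		 * continues the walk.
		 */
		do {
			new_spte = mark_spte_for_access_track(iter->old_spte);
			if (!__tdp_mmu_set_spte_atomic(iter, new_spte))
				return true;
		} while (is_shadow_present_pte(iter->old_spte) &&
			 is_last_spte(iter->old_spte, iter->level));

		return false;
	}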