Message-ID: <ZG5wg3VbG4rCYrfk@google.com>
Date: Wed, 24 May 2023 13:16:03 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Kautuk Consul <kconsul@...ux.vnet.ibm.com>
Cc: Chao Peng <chao.p.peng@...ux.intel.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-api@...r.kernel.org,
linux-doc@...r.kernel.org, qemu-devel@...gnu.org,
linux-kselftest@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>, Mike Rapoport <rppt@...nel.org>,
Steven Price <steven.price@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Vishal Annapurve <vannapurve@...gle.com>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
luto@...nel.org, jun.nakajima@...el.com, dave.hansen@...el.com,
ak@...ux.intel.com, david@...hat.com, aarcange@...hat.com,
ddutile@...hat.com, dhildenb@...hat.com,
Quentin Perret <qperret@...gle.com>,
Michael Roth <michael.roth@....com>, mhocko@...e.com,
Muchun Song <songmuchun@...edance.com>
Subject: Re: [PATCH v7 08/14] KVM: Rename mmu_notifier_*

On Wed, May 24, 2023, Kautuk Consul wrote:
> On 2023-05-23 07:19:43, Sean Christopherson wrote:
> > On Tue, May 23, 2023, Kautuk Consul wrote:
> > > On 2022-07-06 16:20:10, Chao Peng wrote:
> > > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > > index e9153b54e2a4..c262ebb168a7 100644
> > > > --- a/include/linux/kvm_host.h
> > > > +++ b/include/linux/kvm_host.h
> > > > @@ -765,10 +765,10 @@ struct kvm {
> > > >
> > > > #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
> > > > struct mmu_notifier mmu_notifier;
> > > > - unsigned long mmu_notifier_seq;
> > > > - long mmu_notifier_count;
> > > > - gfn_t mmu_notifier_range_start;
> > > > - gfn_t mmu_notifier_range_end;
> > > > + unsigned long mmu_updating_seq;
> > > > + long mmu_updating_count;
> > >
> > > Can we convert mmu_updating_seq and mmu_updating_count to atomic_t ?
> >
> > Heh, can we? Yes. Should we? No.
> >
> > > I see that not all accesses to these are under the kvm->mmu_lock
> > > spinlock.
> >
> > Ya, working as intended. Ignoring gfn_to_pfn_cache for the moment, all accesses
> > to mmu_invalidate_in_progress (was mmu_notifier_count / mmu_updating_count above)
> > are done under mmu_lock. And for mmu_notifier_seq (mmu_updating_seq above),
> > all writes and some reads are done under mmu_lock. The only reads that are done
> > outside of mmu_lock are the initial snapshots of the sequence number.
> >
> > gfn_to_pfn_cache uses a different locking scheme, the comments in
> > mmu_notifier_retry_cache() do a good job explaining the ordering.
> >
> > > This will also remove the need for putting separate smp_wmb() and
> > > smp_rmb() memory barriers while accessing these structure members.
> >
> > No, the memory barriers aren't there to provide any kind of atomicity. The barriers
> > exist to ensure that stores and loads to/from the sequence and invalidate in-progress
> > counts are ordered relative to the invalidation (stores to counts) and creation (loads)
> > of SPTEs. Making the counts atomic changes nothing because atomic operations don't
> > guarantee the necessary ordering.
> I'm not saying that the memory barriers provide atomicity.
> My comment was based on the assumption that "all atomic operations are
> implicit memory barriers". If that assumption is true, then the memory
> barriers here wouldn't be needed if we used atomic operations for these
> two structure members.

Atomics aren't memory barriers on all architectures, e.g. see the various
definitions of smp_mb__after_atomic().
definitions of smp_mb__after_atomic().
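
For illustration only, here's a minimal sketch of that point (hypothetical
field and function names, not actual KVM code): a non-value-returning atomic
op such as atomic_inc() isn't guaranteed to imply any barrier, which is
exactly why smp_mb__after_atomic() exists.

	static atomic_t in_progress;

	static void begin_invalidate(void)
	{
		atomic_inc(&in_progress);
		/*
		 * atomic_inc() does not imply a memory barrier on all
		 * architectures, so an explicit barrier is still needed to
		 * order the count update before subsequent loads/stores.
		 */
		smp_mb__after_atomic();
	}
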
Even if atomic operations did provide barriers, using an atomic would be overkill
and a net negative. On strongly ordered architectures like x86, memory barriers are
just compiler barriers, whereas atomics may be more expensive. Of course, the only
accesses outside of mmu_lock are reads, so on x86 that "atomic" access is just a
READ_ONCE() load, but that's not the case for all architectures.
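
To make that concrete, a simplified sketch of the snapshot-and-recheck
pattern being discussed (field/function names are approximate, not the
exact upstream code):

	static int map_pfn_sketch(struct kvm *kvm)
	{
		unsigned long mmu_seq;

		/* Lockless snapshot; on x86 this is just a plain load. */
		mmu_seq = kvm->mmu_invalidate_seq;
		/* Order the snapshot before the (sleepable) pfn lookup. */
		smp_rmb();

		/* ... resolve the host pfn, may sleep ... */

		write_lock(&kvm->mmu_lock);
		/*
		 * All writers of the sequence hold mmu_lock, so this recheck
		 * needs no atomics: if the sequence changed, an invalidation
		 * ran and the pfn must be re-resolved before installing a
		 * SPTE.
		 */
		if (kvm->mmu_invalidate_seq != mmu_seq) {
			write_unlock(&kvm->mmu_lock);
			return -EAGAIN;	/* retry the fault */
		}

		/* ... install the SPTE ... */
		write_unlock(&kvm->mmu_lock);
		return 0;
	}
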
Anyways, the point is that atomics and memory barriers are different things that
serve different purposes.
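
For completeness, the writer side of the same sketch (again, approximate
names): the smp_wmb() orders the sequence bump before the in-progress count
drops, an inter-field ordering that converting the fields to atomic_t would
not give you.

	static void mmu_invalidate_end_sketch(struct kvm *kvm)
	{
		lockdep_assert_held(&kvm->mmu_lock);

		kvm->mmu_invalidate_seq++;
		/*
		 * Pairs with the smp_rmb() on the fault path: make the
		 * sequence bump visible before the in-progress count drops.
		 */
		smp_wmb();
		kvm->mmu_invalidate_in_progress--;
	}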