Message-ID: <20241113163451.GK35230@nvidia.com>
Date: Wed, 13 Nov 2024 12:34:51 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Uros Bizjak <ubizjak@...il.com>
Cc: Arnd Bergmann <arnd@...db.de>,
Suravee Suthikulpanit <suravee.suthikulpanit@....com>,
linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
Joerg Roedel <joro@...tes.org>, Robin Murphy <robin.murphy@....com>,
vasant.hegde@....com, Linux-Arch <linux-arch@...r.kernel.org>,
Kevin Tian <kevin.tian@...el.com>, jon.grimm@....com,
santosh.shukla@....com, pandoh@...gle.com, kumaranand@...gle.com
Subject: Re: [PATCH v10 05/10] iommu/amd: Introduce helper function to update
256-bit DTE
On Wed, Nov 13, 2024 at 03:36:14PM +0100, Uros Bizjak wrote:
> On Wed, Nov 13, 2024 at 3:28 PM Jason Gunthorpe <jgg@...dia.com> wrote:
> >
> > On Wed, Nov 13, 2024 at 03:14:09PM +0100, Uros Bizjak wrote:
> > > > > Even without an atomicity guarantee, __READ_ONCE() still prevents
> > > > > the compiler from performing unwanted optimizations (please see the
> > > > > first comment in include/asm-generic/rwonce.h) and from unwanted
> > > > > reordering of reads and writes when this function is inlined. The
> > > > > macro does cast the read to volatile, but IMO it is much more
> > > > > readable to use __READ_ONCE() than a volatile qualifier.
> > > >
> > > > Yes it does, but please explain to me what "unwanted reordering" is
> > > > allowed here?
> > >
> > > It is a static function that will be inlined by the compiler
> > > somewhere, so "unwanted reordering" depends on where it gets
> > > inlined. *IF* it will only be called from safe code, then this
> > > restriction on the compiler can be lifted.
> >
> > As long as the values are read within the spinlock, the order does
> > not matter. READ_ONCE() is not required to keep reads contained
> > within spinlocks.
>
> Indeed. But then why complicate things with cmpxchg when we have
> exclusive access to the shared memory? No other thread can access the
> data protected by the spinlock; it won't change between iterations of
> the cmpxchg loop, so atomic access via cmpxchg is not needed.
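
For concreteness, the plain-store update being suggested would look
something like this (a sketch only; the helper name, lock name, and
exact stores are invented for illustration, not taken from the patch):

	/* Hypothetical: the caller holds dte_lock, so no other CPU can
	 * touch *dte and ordinary 64-bit stores would be fine as far
	 * as CPU-vs-CPU concurrency goes. */
	static void dte_update_lower(struct dev_table_entry *dte,
				     struct dev_table_entry *new)
	{
		lockdep_assert_held(&dte_lock);
		dte->data[0] = new->data[0];
		dte->data[1] = new->data[1];
	}
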
This is writing to memory that is shared with the HW, and the HW does
a single atomic 256-bit load of it.
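
For reference, the DTE here is a 256-bit structure. A sketch of the
layout (the union shape follows the idea of this series, but the
member names are not guaranteed to match the actual patch):

	#include <linux/types.h>

	/* 256-bit Device Table Entry, viewable as four 64-bit words
	 * or as two 128-bit halves, the granularity of cmpxchg128. */
	struct dev_table_entry {
		union {
			u64  data[4];
			u128 data128[2];
		};
	};
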
It is important that the CPU do an atomic 128-bit write.

cmpxchg is not required, but an atomic 128-bit store is, and
cmpxchg128 is the only 128-bit primitive Linux offers.
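
Used purely for its store, it looks roughly like this (a sketch; the
helper name is invented and the caller is assumed to already hold the
spinlock):

	#include <linux/atomic.h>

	static void dte_store_lower128(struct dev_table_entry *dte,
				       struct dev_table_entry *new)
	{
		u128 old = dte->data128[0];

		/* Nothing else writes *dte while the spinlock is held,
		 * so this is expected to succeed on the first pass; the
		 * loop exists only to issue a single 16-byte store. */
		while (!try_cmpxchg128(&dte->data128[0], &old,
				       new->data128[0]))
			;
	}
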
Jason