Message-ID: <CAFULd4aFvGj=kz5Si9WpAr33KFtJDO5+sdNO=NBB+boS=E-E_Q@mail.gmail.com>
Date: Wed, 13 Nov 2024 15:36:14 +0100
From: Uros Bizjak <ubizjak@...il.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Arnd Bergmann <arnd@...db.de>, Suravee Suthikulpanit <suravee.suthikulpanit@....com>,
linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
Joerg Roedel <joro@...tes.org>, Robin Murphy <robin.murphy@....com>, vasant.hegde@....com,
Linux-Arch <linux-arch@...r.kernel.org>, Kevin Tian <kevin.tian@...el.com>, jon.grimm@....com,
santosh.shukla@....com, pandoh@...gle.com, kumaranand@...gle.com
Subject: Re: [PATCH v10 05/10] iommu/amd: Introduce helper function to update
256-bit DTE
On Wed, Nov 13, 2024 at 3:28 PM Jason Gunthorpe <jgg@...dia.com> wrote:
>
> On Wed, Nov 13, 2024 at 03:14:09PM +0100, Uros Bizjak wrote:
> > > > Even without atomicity guarantee, __READ_ONCE() still prevents the
> > > > compiler from performing unwanted optimizations (please see the first
> > > > comment in include/asm-generic/rwonce.h) and unwanted reordering of
> > > > reads and writes when this function is inlined. This macro does cast
> > > > the read to volatile, but IMO it is much more readable to use
> > > > __READ_ONCE() than volatile qualifier.
> > >
> > > Yes it does, but please explain to me what "unwanted reordering" is
> > > allowed here?
> >
> > It is a static function that will be inlined by the compiler
> > somewhere, so "unwanted reordering" depends on where it will be
> > inlined. *IF* it will be called from safe code, then this limitation
> > for the compiler can be lifted.
>
> As long as the values are read within the spinlock the order does not
> matter. READ_ONCE() is not required to contain reads within spinlocks.
Indeed. But then why complicate things with cmpxchg when we have
exclusive access to the shared memory? No other thread can access the
data protected by the spinlock, so it won't change between iterations
of the cmpxchg loop, and atomic access via cmpxchg is not needed.
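To illustrate what I have in mind (a hypothetical sketch only; the
struct and function names are made up here and are not taken from the
patch), the update under the lock could be written as plain per-u64
stores, with WRITE_ONCE() kept only to stop the compiler from tearing
or reordering the stores:

    /* Hypothetical 256-bit DTE layout, for illustration only. */
    struct dte_256 {
            u64 data[4];
    };

    static void update_dte256_locked(struct dte_256 *dte,
                                     const struct dte_256 *new,
                                     spinlock_t *lock)
    {
            int i;

            lockdep_assert_held(lock);

            /*
             * The spinlock gives exclusive access among CPU writers,
             * so the value cannot change under us; no cmpxchg loop
             * is needed, only compiler-ordering of the stores.
             */
            for (i = 0; i < 4; i++)
                    WRITE_ONCE(dte->data[i], new->data[i]);
    }

Whether that is acceptable of course depends on what the IOMMU
hardware is allowed to observe while the entry is being rewritten.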
Uros.