Message-ID: <CAFULd4bwyrsXFXJzZQnGyZsQo_RisQLR7qkjMdBBONQzp67xCg@mail.gmail.com>
Date: Wed, 13 Nov 2024 21:08:34 +0100
From: Uros Bizjak <ubizjak@...il.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Arnd Bergmann <arnd@...db.de>, Suravee Suthikulpanit <suravee.suthikulpanit@....com>, 
	linux-kernel@...r.kernel.org, iommu@...ts.linux.dev, 
	Joerg Roedel <joro@...tes.org>, Robin Murphy <robin.murphy@....com>, vasant.hegde@....com, 
	Linux-Arch <linux-arch@...r.kernel.org>, Kevin Tian <kevin.tian@...el.com>, jon.grimm@....com, 
	santosh.shukla@....com, pandoh@...gle.com, kumaranand@...gle.com
Subject: Re: [PATCH v10 05/10] iommu/amd: Introduce helper function to update
 256-bit DTE

On Wed, Nov 13, 2024 at 8:50 PM Uros Bizjak <ubizjak@...il.com> wrote:
>
> On Wed, Nov 13, 2024 at 5:34 PM Jason Gunthorpe <jgg@...dia.com> wrote:
> >
> > On Wed, Nov 13, 2024 at 03:36:14PM +0100, Uros Bizjak wrote:
> > > On Wed, Nov 13, 2024 at 3:28 PM Jason Gunthorpe <jgg@...dia.com> wrote:
> > > >
> > > > On Wed, Nov 13, 2024 at 03:14:09PM +0100, Uros Bizjak wrote:
> > > > > > > Even without an atomicity guarantee, __READ_ONCE() still prevents the
> > > > > > > compiler from performing unwanted optimizations (please see the first
> > > > > > > comment in include/asm-generic/rwonce.h) and unwanted reordering of
> > > > > > > reads and writes when this function is inlined. This macro does cast
> > > > > > > the read to volatile, but IMO it is much more readable to use
> > > > > > > __READ_ONCE() than a volatile qualifier.
> > > > > >
> > > > > > Yes it does, but please explain to me what "unwanted reordering" is
> > > > > > allowed here?
> > > > >
> > > > > It is a static function that will be inlined by the compiler
> > > > > somewhere, so "unwanted reordering" depends on where it will be
> > > > > inlined. *IF* it is called from safe code, then this limitation
> > > > > on the compiler can be lifted.
> > > >
> > > > As long as the values are read within the spinlock, the order does not
> > > > matter. READ_ONCE() is not required to contain reads within spinlocks.
> > >
> > > Indeed. But then why complicate things with cmpxchg, when we have
> > > exclusive access to the shared memory? No other thread can access the
> > > data while it is protected by the spinlock; it won't change between
> > > invocations of cmpxchg in the loop, so atomic access via cmpxchg is not needed.
> >
> > This is writing to memory shared with HW, and HW is doing a 256-bit
> > atomic load.
> >
> > It is important that the CPU do a 128-bit atomic write.
> >
> > cmpxchg is not required, but a 128-bit store is. cmpxchg128 is the
> > only primitive Linux offers.
>
> If we want to exercise only the atomicity property of cmpxchg16b, then we
> can look at atomic64_set_cx8() in arch/x86/lib/atomic64_cx8_32.S to see how
> cmpxchg8b is used to implement the core of arch_atomic64_set() for x86_32:
>
> SYM_FUNC_START(atomic64_set_cx8)
> 1:
> /* we don't need LOCK_PREFIX since aligned 64-bit writes
>  * are atomic on 586 and newer */
>     cmpxchg8b (%esi)
>     jne 1b
>
>     RET
> SYM_FUNC_END(atomic64_set_cx8)
>
> We *do* have arch_try_cmpxchg128_local(), which emits cmpxchg16b
> without the lock prefix, and perhaps we can use it to create a 128-bit
> store, something like:
>
> static __always_inline void iommu_atomic128_set(__int128 *ptr, __int128 val)
> {
>     __int128 old = *ptr;
>
>     do {
>     } while (!arch_try_cmpxchg128_local(ptr, &old, val));
> }

Eh, I forgot that the data does not change under the lock, so we don't
need the cmpxchg loop.

static __always_inline void iommu_atomic128_set(__int128 *ptr, __int128 val)
{
    /* *ptr is stable here, so the compare always succeeds and val is stored. */
    arch_cmpxchg128_local(ptr, *ptr, val);
}

should do the trick.
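
For illustration only, a caller could then store the two 128-bit halves of
the 256-bit DTE with such a helper; the function and names below are just a
sketch made up for this mail, not something from the patch:

/*
 * Illustrative sketch only: write each 128-bit half of a 256-bit DTE with
 * a single atomic store.  Names are invented here, and any ordering or
 * valid-bit handling the real update sequence needs is ignored.
 */
static void write_dte_halves(__int128 *dte, __int128 lo, __int128 hi)
{
    iommu_atomic128_set(&dte[0], lo);
    iommu_atomic128_set(&dte[1], hi);
}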

Uros.
