Message-ID: <Y/blLAj2IcX5jSZU@li-a450e7cc-27df-11b2-a85c-b5a9ac31e8ef.ibm.com>
Date:   Thu, 23 Feb 2023 09:31:48 +0530
From:   Kautuk Consul <kconsul@...ux.vnet.ibm.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     Michael Ellerman <mpe@...erman.id.au>,
        Nicholas Piggin <npiggin@...il.com>,
        Christophe Leroy <christophe.leroy@...roup.eu>,
        Rohan McLure <rmclure@...ux.ibm.com>,
        linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and
 wmb to lwsync

On 2023-02-22 09:47:19, Paul E. McKenney wrote:
> On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote:
> > A document on ibm.com states:
> > "Ensures that all instructions preceding the call to __lwsync
> >  complete before any subsequent store instructions can be executed
> >  on the processor that executed the function. Also, it ensures that
> >  all load instructions preceding the call to __lwsync complete before
> >  any subsequent load instructions can be executed on the processor
> >  that executed the function. This allows you to synchronize between
> >  multiple processors with minimal performance impact, as __lwsync
> >  does not wait for confirmation from each processor."
> > 
> > That's why smp_rmb() and smp_wmb() are defined as lwsync.
> > But the same understanding also applies to parallel pipelined
> > execution within each PowerPC processor.
> > So, use the lwsync instruction for rmb() and wmb() on the PPC
> > architectures that support it.
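
For reference, the pattern served by the quoted guarantee is classic
message passing between CPUs. A minimal sketch (illustrative only, not
part of the patch; the payload/flag variables are invented), in which
both barriers assemble to lwsync on PPC64:

	static int payload;
	static int flag;

	static void producer(void)		/* runs on CPU 0 */
	{
		WRITE_ONCE(payload, 42);	/* store to cacheable memory */
		smp_wmb();			/* lwsync: orders store vs. store */
		WRITE_ONCE(flag, 1);
	}

	static void consumer(void)		/* runs on CPU 1 */
	{
		if (READ_ONCE(flag)) {		/* load the flag */
			smp_rmb();		/* lwsync: orders load vs. load */
			BUG_ON(READ_ONCE(payload) != 42); /* payload visible */
		}
	}

The patch's argument is that the same lwsync suffices for rmb()/wmb();
the question below is whether that still holds once uncached (I/O)
accesses are involved.
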
> > 
> > Signed-off-by: Kautuk Consul <kconsul@...ux.vnet.ibm.com>
> > ---
> >  arch/powerpc/include/asm/barrier.h | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > index b95b666f0374..e088dacc0ee8 100644
> > --- a/arch/powerpc/include/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -36,8 +36,15 @@
> >   * heavy-weight sync, so smp_wmb() can be a lighter-weight eieio.
> >   */
> >  #define __mb()   __asm__ __volatile__ ("sync" : : : "memory")
> > +
> > +/* The sub-arch has lwsync. */
> > +#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
> > +#define __rmb() __asm__ __volatile__ ("lwsync" : : : "memory")
> > +#define __wmb() __asm__ __volatile__ ("lwsync" : : : "memory")
> 
> Hmmm...
> 
> Does the lwsync instruction now order both cached and uncached accesses?
> Or have there been changes so that smp_rmb() and smp_wmb() get this
> definition, while rmb() and wmb() still get the sync instruction?
> (Not seeing this, but I could easily be missing something.)
> 
> 							Thanx, Paul
Up front, I don't see any documentation stating that lwsync
distinguishes between cached and uncached accesses.
That's why I asked the mailing list for test results from
kernel load testing.
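
For illustration, the distinction at stake is roughly the following
driver pattern (hypothetical; my_dev, CMD_GO and DOORBELL are invented
names), where wmb() must order a cacheable store against a later store
to uncached MMIO space -- ordering that sync provides but that lwsync,
per the ISA, does not guarantee for caching-inhibited storage:

	static void ring_doorbell(struct my_dev *dev)
	{
		dev->cmd_ring[0] = CMD_GO;	/* store to cacheable DMA memory */
		wmb();				/* must also order against the
						 * uncached MMIO store below,
						 * hence historically sync */
		writel(1, dev->regs + DOORBELL); /* store to uncached MMIO space */
	}

If lwsync leaves the uncached store unordered, the device could observe
the doorbell before the command, which is presumably why rmb()/wmb()
have stayed sync.
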
> 
> > +#else
> >  #define __rmb()  __asm__ __volatile__ ("sync" : : : "memory")
> >  #define __wmb()  __asm__ __volatile__ ("sync" : : : "memory")
> > +#endif
> >  
> >  /* The sub-arch has lwsync */
> >  #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
> > -- 
> > 2.31.1
> > 
