Message-ID: <20190503181508.GQ8599@gate.crashing.org>
Date: Fri, 3 May 2019 13:15:08 -0500
From: Segher Boessenkool <segher@...nel.crashing.org>
To: Christophe Leroy <christophe.leroy@....fr>
Cc: linux-kernel@...r.kernel.org, Scott Wood <oss@...error.net>,
Paul Mackerras <paulus@...ba.org>,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH] powerpc/32: Remove memory clobber asm constraint on dcbX() functions
Hi Christophe,
On Fri, May 03, 2019 at 04:14:13PM +0200, Christophe Leroy wrote:
> A while ago I proposed the following patch, and didn't get any comment
> back on it.
I didn't see it.  Maybe because of the holidays :-)
> Do you have any opinion on it ? Is it good and worth it ?
> Le 09/01/2018 à 07:57, Christophe Leroy a écrit :
> >Instead of just telling GCC that dcbz(), dcbi(), dcbf() and dcbst()
> >clobber memory, tell it what it clobbers:
> >* dcbz(), dcbi() and dcbf() clobbers one cacheline as output
> >* dcbf() and dcbst() clobbers one cacheline as input
You cannot "clobber input".
Seen another way, only dcbi clobbers anything; dcbz zeroes the line instead,
and dcbf and dcbst only change which caches the data lives in.
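To make that distinction concrete, here is a small portable C model (a sketch only, not kernel code; the helper names and the 32-byte line size are assumptions) of what each instruction does to the memory contents a program can observe:

```c
#include <stdint.h>
#include <string.h>

#define LINE 32  /* assumed cache line size for this model */

/* Start of the cache line containing p. */
static unsigned char *line_of(unsigned char *p)
{
	return (unsigned char *)((uintptr_t)p & ~(uintptr_t)(LINE - 1));
}

/* dcbz: the whole line reads as zero afterwards. */
static void model_dcbz(unsigned char *p)
{
	memset(line_of(p), 0, LINE);
}

/* dcbi: the line is discarded, so its contents become unpredictable;
 * a poison pattern stands in for "clobbered" here. */
static void model_dcbi(unsigned char *p)
{
	memset(line_of(p), 0xAA, LINE);
}

/* dcbf and dcbst: data moves between cache and memory, but the
 * contents the program observes do not change at all. */
static void model_dcbf(unsigned char *p)
{
	(void)p;
}
```

Only dcbi's model makes the data unpredictable, which is what "clobber" means to the compiler; at this level dcbf and dcbst are pure no-ops.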
> >--- a/arch/powerpc/include/asm/cache.h
> >+++ b/arch/powerpc/include/asm/cache.h
> >@@ -82,22 +82,31 @@ extern void _set_L3CR(unsigned long);
> >
> > static inline void dcbz(void *addr)
> > {
> >- __asm__ __volatile__ ("dcbz 0, %0" : : "r"(addr) : "memory");
> >+ __asm__ __volatile__ ("dcbz 0, %1" :
> >+ "=m"(*(char (*)[L1_CACHE_BYTES])addr) :
> >+ "r"(addr) :);
> > }
The instruction does *not* work on the memory pointed to by addr. It
works on the cache line containing the address addr.
If you want to have addr always aligned, you need to document this, and
check all callers, etc.
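A sketch of the mismatch (the 32-byte line size and the helper names are assumptions): the patch's constraint describes the L1_CACHE_BYTES bytes starting exactly at addr, while the instruction acts on the aligned line containing addr, so the two ranges coincide only when addr is line-aligned.

```c
#include <stdint.h>

#define L1_CACHE_BYTES 32  /* assumed line size */

/* Range the instruction actually touches: the aligned line holding addr. */
static uintptr_t line_start(uintptr_t addr)
{
	return addr & ~(uintptr_t)(L1_CACHE_BYTES - 1);
}

/* Range the "=m"(*(char (*)[L1_CACHE_BYTES])addr) constraint describes:
 * L1_CACHE_BYTES bytes starting exactly at addr. */
static uintptr_t constraint_start(uintptr_t addr)
{
	return addr;
}
```

For an unaligned addr the instruction also modifies the bytes between line_start(addr) and addr, which the constraint never tells GCC about.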
> > static inline void dcbf(void *addr)
> > {
> >- __asm__ __volatile__ ("dcbf 0, %0" : : "r"(addr) : "memory");
> >+ __asm__ __volatile__ ("dcbf 0, %1" :
> >+ "=m"(*(char (*)[L1_CACHE_BYTES])addr) :
> >+ "r"(addr), "m"(*(char
> >(*)[L1_CACHE_BYTES])addr) :
> >+ );
> > }
Newline damage... Was that your mailer?
Also, you may want a "memory" clobber anyway, to get ordering correct
for the synchronisation instructions.
I think your changes make things less robust than they were before.
[ Btw. Instead of
__asm__ __volatile__ ("dcbf 0, %0" : : "r"(addr) : "memory");
you can do
__asm__ __volatile__ ("dcbf %0" : : "Z"(addr) : "memory");
to save some insns here and there. ]
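Putting the two suggestions together, a dcbf helper along these lines (a sketch only, for a PowerPC target; it assumes GCC's "Z" machine constraint and the %y operand modifier, which prints the reg,reg form the instruction expects) would keep the ordering-relevant "memory" clobber while letting GCC choose the addressing:

```c
static inline void dcbf(void *addr)
{
	/* "Z" lets GCC form the address itself instead of forcing it
	 * through one register, and the "memory" clobber keeps this
	 * ordered against surrounding accesses. */
	__asm__ __volatile__ ("dcbf %y0" : : "Z"(*(unsigned char *)addr) : "memory");
}
```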
Segher