Message-ID: <20190726110135.GO31381@hirez.programming.kicks-ass.net>
Date: Fri, 26 Jul 2019 13:01:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jari Ruusu <jari.ruusu@...il.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel@...r.kernel.org, stable@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Sasha Levin <sashal@...nel.org>
Subject: Re: [PATCH 4.19 079/271] x86/atomic: Fix
smp_mb__{before,after}_atomic()
On Fri, Jul 26, 2019 at 01:18:06PM +0300, Jari Ruusu wrote:
> Greg Kroah-Hartman wrote:
> > [ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]
> >
> > Recent probing at the Linux Kernel Memory Model uncovered a
> > 'surprise'. Strongly ordered architectures where the atomic RmW
> > primitive implies full memory ordering and
> > smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
> > fail for:
> >
> > *x = 1;
> > atomic_inc(u);
> > smp_mb__after_atomic();
> > r0 = *y;
>
> [snip]
>
> > --- a/arch/x86/include/asm/atomic.h
> > +++ b/arch/x86/include/asm/atomic.h
> > @@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
> > {
> > asm volatile(LOCK_PREFIX "addl %1,%0"
> > : "+m" (v->counter)
> > - : "ir" (i));
> > + : "ir" (i) : "memory");
> > }
> >
> > /**
>
> Shouldn't those clobber constraints actually be: "memory","cc"?
> That is because addl, subl (and other) machine instructions
> actually modify the flags register too.
>
> gcc docs say: The "cc" clobber indicates that the assembler
> code modifies the flags register.
GCC on x86 assumes any asm() will clobber "cc", so it does not need to
be listed explicitly.
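
For illustration only, here is a minimal user-space sketch of the same
constraint pattern (not the kernel header itself; names are made up,
assuming x86-64 and GCC/Clang). The explicit "memory" clobber is what
turns the asm into a compiler barrier; "cc" is implied on x86, so
listing it would be redundant:

	/* Hypothetical stand-alone example, not arch/x86 code. */
	static inline void my_atomic_add(int i, int *v)
	{
		asm volatile("lock addl %1,%0"
			     : "+m" (*v)
			     : "ir" (i)
			     : "memory");	/* compiler barrier: no reordering
						   of surrounding loads/stores */
	}

Without the "memory" clobber the compiler is free to move unrelated
loads and stores across the asm, which is exactly the reordering the
litmus test in the commit message trips over.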