Message-ID: <CAMzpN2j6gp-+zwa+meWbGC_TCEJF0GSC-xQ3mdMU07DxDR+pmA@mail.gmail.com>
Date: Mon, 18 May 2020 19:45:53 -0400
From: Brian Gerst <brgerst@...il.com>
To: Nick Desaulniers <ndesaulniers@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 5/7] x86/percpu: Clean up percpu_add_return_op()
On Mon, May 18, 2020 at 6:46 PM Nick Desaulniers
<ndesaulniers@...gle.com> wrote:
>
> On Sun, May 17, 2020 at 8:29 AM Brian Gerst <brgerst@...il.com> wrote:
> >
> > The core percpu macros already have a switch on the data size, so the switch
> > in the x86 code is redundant and produces more dead code.
> >
> > Also use appropriate types for the width of the instructions. This avoids
> > errors when compiling with Clang.
> >
> > Signed-off-by: Brian Gerst <brgerst@...il.com>
> > ---
> > arch/x86/include/asm/percpu.h | 51 +++++++++++------------------------
> > 1 file changed, 16 insertions(+), 35 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
> > index 21c5013a681a..ac8c391a190e 100644
> > --- a/arch/x86/include/asm/percpu.h
> > +++ b/arch/x86/include/asm/percpu.h
> > @@ -199,34 +199,15 @@ do { \
> > /*
> > * Add return operation
> > */
> > -#define percpu_add_return_op(qual, var, val) \
> > +#define percpu_add_return_op(size, qual, _var, _val) \
> > ({ \
> > - typeof(var) paro_ret__ = val; \
> > - switch (sizeof(var)) { \
> > - case 1: \
> > - asm qual ("xaddb %0, "__percpu_arg(1) \
> > - : "+q" (paro_ret__), "+m" (var) \
> > - : : "memory"); \
> > - break; \
> > - case 2: \
> > - asm qual ("xaddw %0, "__percpu_arg(1) \
> > - : "+r" (paro_ret__), "+m" (var) \
> > - : : "memory"); \
> > - break; \
> > - case 4: \
> > - asm qual ("xaddl %0, "__percpu_arg(1) \
> > - : "+r" (paro_ret__), "+m" (var) \
> > - : : "memory"); \
> > - break; \
> > - case 8: \
> > - asm qual ("xaddq %0, "__percpu_arg(1) \
> > - : "+re" (paro_ret__), "+m" (var) \
>
> ^ before, we use the "+re" constraint for the 8-byte case.
>
> > - : : "memory"); \
> > - break; \
> > - default: __bad_percpu_size(); \
>
> Comment on the series as a whole. After applying the series, the
> final reference to __bad_percpu_size and the last remaining switch
> statement in arch/x86/include/asm/percpu.h are in the definition of
> the percpu_stable_op() macro. If you clean that up, too, then the
> rest of this file feels more consistent with your series, even if
> it's not a blocker for Clang i386 support. Then you can get rid of
> __bad_percpu_size, too!
I haven't yet figured out what to do with percpu_stable_op(). It's
x86-specific, so there is no corresponding switch in the core code to
fall back on. I think it is supposed to be something like READ_ONCE()
but for percpu variables, though I'm not 100% sure.
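
For anyone who hasn't looked at it, the shape is roughly this; an
abbreviated, from-memory sketch of the 8-byte case under a made-up
name, not the verbatim macro:

/* Abbreviated 8-byte-only sketch of the percpu_stable_op() shape
 * (hypothetical my_percpu_read_stable_8(); the real macro dispatches
 * on sizeof() with a per-size asm template, which is the switch in
 * question).  The address goes in via a "p" constraint rather than
 * an "m" operand, which appears to be what lets the compiler treat
 * the value as stable and reuse earlier reads. */
#define my_percpu_read_stable_8(var)				\
({								\
	unsigned long pfo_ret__;				\
	asm("movq "__percpu_arg(P1)", %0"			\
	    : "=r" (pfo_ret__)					\
	    : "p" (&(var)));					\
	(typeof(var))pfo_ret__;					\
})

So any cleanup there would have to preserve that property, not just
mechanically reuse the generic size-dispatch macros.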
> > - } \
> > - paro_ret__ += val; \
> > - paro_ret__; \
> > + __pcpu_type_##size paro_tmp__ = __pcpu_cast_##size(_val); \
> > + asm qual (__pcpu_op2_##size("xadd", "%[tmp]", \
> > + __percpu_arg([var])) \
> > + : [tmp] __pcpu_reg_##size("+", paro_tmp__), \
>
> ^ after, for `size == 8`, we use "+r". [0] says for "e":
>
> 32-bit signed integer constant, or a symbolic reference known to fit
> that range (for immediate operands in sign-extending x86-64
> instructions).
>
> I'm guessing we're restricting the input to not allow for 64-bit
> signed integer constants? Looking at the documentation for `xadd`
> (i.e. "exchange and add") [1], it looks like immediates are not
> allowed as operands, only registers or memory addresses. So it seems
> that "e" was never necessary. It might be helpful to note that in
> the commit message, should you end up sending a v2 of the series.
> Maybe some folks with more x86 inline asm experience can triple
> check/verify?
That is correct. The "e" constraint shouldn't have been there, since
XADD doesn't allow immediates. I'll make that clearer in V2.
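
To illustrate for anyone following along, here are two toy helpers
(made up for this mail, not from the patch). "e" permits a 32-bit
sign-extended immediate as an alternative to a register, which only
helps for instructions that actually have an immediate form:

static inline void toy_add(unsigned long *p, unsigned long val)
{
	/* ADD has an immediate form, so "re" lets the compiler emit
	 * either "addq $imm32, mem" or "addq %reg, mem": */
	asm("addq %1, %0" : "+m" (*p) : "re" (val));
}

static inline unsigned long toy_xadd(unsigned long *p, unsigned long val)
{
	/* XADD has no immediate form, and its source register is
	 * read-write (it receives the old value), so an immediate
	 * could never satisfy the "+" operand anyway.  Plain "+r"
	 * is the correct constraint: */
	asm("xaddq %0, %1"
	    : "+r" (val), "+m" (*p)
	    : : "memory");
	return val;	/* previous value of *p */
}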
--
Brian Gerst