Message-ID: <CAJF2gTSAxpAi=LbAdu7jntZRUa=-dJwL0VfmDfBV5MHB=rcZ-w@mail.gmail.com>
Date:   Mon, 11 Apr 2022 21:20:04 +0800
From:   Guo Ren <guoren@...nel.org>
To:     Mark Rutland <mark.rutland@....com>
Cc:     Palmer Dabbelt <palmer@...osinc.com>,
        Arnd Bergmann <arnd@...db.de>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-riscv <linux-riscv@...ts.infradead.org>,
        linux-arch <linux-arch@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Guo Ren <guoren@...ux.alibaba.com>,
        Palmer Dabbelt <palmer@...belt.com>
Subject: Re: [PATCH] riscv: Optimize AMO acquire/release usage

Hi Mark,

On Mon, Apr 11, 2022 at 5:35 PM Mark Rutland <mark.rutland@....com> wrote:
>
> Hi Guo,
>
> On Wed, Apr 06, 2022 at 08:04:05PM +0800, guoren@...nel.org wrote:
> > From: Guo Ren <guoren@...ux.alibaba.com>
> >
> > Using RISCV_ACQUIRE/RELEASE_BARRIER for xchg/cmpxchg_acquire/release
> > is more expensive than the instructions' native .aq/.rl annotations.
> > This patch fixes these cases in accordance with the RISC-V
> > Instruction Set Manual, Volume I: RISC-V User-Level ISA, "A" Standard
> > Extension for Atomic Instructions, Version 2.1.
> >
> > Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@...nel.org>
> > Cc: Palmer Dabbelt <palmer@...belt.com>
> > ---
> >  arch/riscv/include/asm/atomic.h  | 70 ++++++++++++++++++++++++++++++--
> >  arch/riscv/include/asm/cmpxchg.h | 30 +++++---------
> >  2 files changed, 76 insertions(+), 24 deletions(-)
>
> I'll leave the bulk of this to Palmer, but I spotted something below which
> doesn't look right.
>
> > @@ -315,12 +379,11 @@ static __always_inline int arch_atomic_sub_if_positive(atomic_t *v, int offset)
> >         int prev, rc;
> >
> >       __asm__ __volatile__ (
> > -             "0:     lr.w     %[p],  %[c]\n"
> > +             "0:     lr.w.aq  %[p],  %[c]\n"
> >               "       sub      %[rc], %[p], %[o]\n"
> >               "       bltz     %[rc], 1f\n"
> >               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> >               "       bnez     %[rc], 0b\n"
> > -             "       fence    rw, rw\n"
> >               "1:\n"
> >               : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
> >               : [o]"r" (offset)
>
> I believe in this case the existing code here is correct, and this optimization
> is broken.
Yes, you are right. My patch would break RISC-V's memory consistency
between acquire & release. Thanks for your correction.

>
> I believe the existing code is using RELEASE + FULL-BARRIER to ensure full
> ordering, since separate ACQUIRE+RELEASE cannot. For a description of the
> problem, see the commit message for:
I have another question: the RELEASE (preventing ACCESS-A from moving
after stlxr) + FULL-BARRIER pattern is needed on arm64 because arm64
has no "stlaxr", right? On RISC-V we could use sc.w.aqrl directly and
avoid the extra fence.

New patch:
        __asm__ __volatile__ (
                "0:     lr.w     %[p],  %[c]\n"
                "       sub      %[rc], %[p], %[o]\n"
                "       bltz     %[rc], 1f\n"
-               "       sc.w.rl  %[rc], %[rc], %[c]\n"
+               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
                "       bnez     %[rc], 0b\n"
-               "       fence    rw, rw\n"
                "1:\n"
                : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
                : [o]"r" (offset)

(It surprises me that lr.w.aq seems to be unnecessary in practice.)

>
>   8e86f0b409a44193 ("arm64: atomics: fix use of acquire + release for full barrier semantics")
>
> The gist is that HW can re-order:
>
>         ACCESS-A
>         ACQUIRE
>         RELEASE
>         ACCESS-B
>
> ... to:
>
>         ACQUIRE
>         ACCESS-B
>         ACCESS-A
>         RELEASE
>
> ... violating FULL ordering semantics.
>
> This will apply for *any* operation where FULL ordering is required, which I
> suspect applies to some more cases below.


>
> > @@ -337,12 +400,11 @@ static __always_inline s64 arch_atomic64_sub_if_positive(atomic64_t *v, s64 offs
> >         long rc;
> >
> >       __asm__ __volatile__ (
> > -             "0:     lr.d     %[p],  %[c]\n"
> > +             "0:     lr.d.aq  %[p],  %[c]\n"
> >               "       sub      %[rc], %[p], %[o]\n"
> >               "       bltz     %[rc], 1f\n"
> >               "       sc.d.rl  %[rc], %[rc], %[c]\n"
> >               "       bnez     %[rc], 0b\n"
> > -             "       fence    rw, rw\n"
> >               "1:\n"
> >               : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
> >               : [o]"r" (offset)
>
> My comment for arch_atomic_sub_if_positive() applies here too.
>
>
> [...]
>
> > @@ -309,11 +301,10 @@
> >       switch (size) {                                                 \
> >       case 4:                                                         \
> >               __asm__ __volatile__ (                                  \
> > -                     "0:     lr.w %0, %2\n"                          \
> > +                     "0:     lr.w.aq %0, %2\n"                       \
> >                       "       bne  %0, %z3, 1f\n"                     \
> >                       "       sc.w.rl %1, %z4, %2\n"                  \
> >                       "       bnez %1, 0b\n"                          \
> > -                     "       fence rw, rw\n"                         \
> >                       "1:\n"                                          \
> >                       : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> >                       : "rJ" ((long)__old), "rJ" (__new)              \
> > @@ -321,11 +312,10 @@
> >               break;                                                  \
> >       case 8:                                                         \
> >               __asm__ __volatile__ (                                  \
> > -                     "0:     lr.d %0, %2\n"                          \
> > +                     "0:     lr.d.aq %0, %2\n"                       \
> >                       "       bne %0, %z3, 1f\n"                      \
> >                       "       sc.d.rl %1, %z4, %2\n"                  \
> >                       "       bnez %1, 0b\n"                          \
> > -                     "       fence rw, rw\n"                         \
> >                       "1:\n"                                          \
> >                       : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> >                       : "rJ" (__old), "rJ" (__new)                    \
>
> I don't have enough context to say for sure, but I suspect these are expecting
> FULL ordering too, and would be broken, as above.
>
> Thanks,
> Mark.



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/
