Date:   Wed, 14 Apr 2021 08:23:51 +0800
From:   Guo Ren <guoren@...nel.org>
To:     Catalin Marinas <catalin.marinas@....com>
Cc:     Christoph Müllner <christophm30@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Palmer Dabbelt <palmer@...belt.com>,
        Anup Patel <anup@...infault.org>,
        linux-riscv <linux-riscv@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Guo Ren <guoren@...ux.alibaba.com>,
        Will Deacon <will.deacon@....com>,
        Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH] riscv: locks: introduce ticket-based spinlock implementation

On Tue, Apr 13, 2021 at 5:31 PM Catalin Marinas <catalin.marinas@....com> wrote:
>
> On Tue, Apr 13, 2021 at 11:22:40AM +0200, Christoph Müllner wrote:
> > On Tue, Apr 13, 2021 at 10:03 AM Peter Zijlstra <peterz@...radead.org> wrote:
> > > On Mon, Apr 12, 2021 at 11:54:55PM +0200, Christoph Müllner wrote:
> > > > On Mon, Apr 12, 2021 at 7:33 PM Palmer Dabbelt <palmer@...belt.com> wrote:
> > > > > My plan is to add a generic ticket-based lock, which can be selected at
> > > > > compile time.  It'll have no architecture dependencies (though it'll
> > > > > likely have some hooks for architectures that can make this go faster).
> > > > > Users can then just pick which spinlock flavor they want, with the idea
> > > > > being that smaller systems will perform better with ticket locks and
> > > > > larger systems will perform better with queued locks.  The main goal
> > > > > here is to give the less widely used architectures an easy way to have
> > > > > fair locks, as right now we've got a lot of code duplication because any
> > > > > architecture that wants ticket locks has to do it themselves.
> > > >
> > > > In the case of LL/SC sequences, we have a maximum of 16 instructions
> > > > on RISC-V. My concern with a pure-C implementation would be that
> > > > we cannot guarantee this (e.g. somebody wants to compile with -O0)
> > > > and I don't know of a way to abort the build in case this limit is exceeded.
> > > > Therefore I have preferred inline assembly for OpenSBI (my initial idea
> > > > was to use closure-like LL/SC macros, where you can write the loop
> > > > in form of C code).
> > >
> > > For ticket locks you really only need atomic_fetch_add() and
> > > smp_store_release(), and an architectural guarantee that the
> > > atomic_fetch_add() has fwd progress under contention and that a sub-word
> > > store (through smp_store_release()) will fail the SC.
> > >
> > > Then you can do something like:
> > >
> > > void lock(atomic_t *lock)
> > > {
> > >         u32 val = atomic_fetch_add(1<<16, lock); /* SC, gives us RCsc */
> > >         u16 ticket = val >> 16;
> > >
> > >         for (;;) {
> > >                 if (ticket == (u16)val)
> > >                         break;
> > >                 cpu_relax();
> > >                 val = atomic_read_acquire(lock);
> > >         }
> > > }
> > >
> > > void unlock(atomic_t *lock)
> > > {
> > >         u16 *ptr = (u16 *)lock + (!!__BIG_ENDIAN__);
> > >         u32 val = atomic_read(lock);
> > >
> > >         smp_store_release(ptr, (u16)val + 1);
> > > }
> > >
> > > That's _almost_ as simple as a test-and-set :-) It isn't quite optimal
> > > on x86 for not being allowed to use a memop on unlock, since it's being
> > > forced into a load-store because of all the volatile, but whatever.
> >
> > What about trylock()?
> > I.e. one could implement trylock() without a loop, by letting
> > trylock() fail if the SC fails.
> > That looks safe at first glance, but nobody does this right now.
I think it's safe for riscv LR/SC, because of section 8.3 of the A extension in the spec:
"As a consequence of the eventuality guarantee, if some harts in an
execution environment are executing constrained LR/SC loops, and no
other harts or devices in the execution environment execute an
unconditional store or AMO to that reservation set, then at least one
hart will eventually exit its constrained LR/SC loop."

So it guarantees forward progress for an LR/SC pair:

CPU0                    CPU1
======                  ======
LR addr1
                        LR addr1
                        SC addr1    // guaranteed to succeed
SC addr1
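
(For concreteness, a "constrained LR/SC loop" in the sense of that paragraph
is a short retry loop whose only memory accesses are the LR and the SC to the
same reservation set. A rough sketch, not taken from the spec or the kernel,
of such a loop implementing a 32-bit swap:)

/* Illustrative only: a constrained LR/SC loop doing a 32-bit swap. */
static inline u32 lr_sc_swap32(u32 *p, u32 newval)
{
	u32 old, fail;

	__asm__ __volatile__(
	"0:	lr.w	%0, (%2)\n"	/* load-reserved: opens the reservation */
	"	sc.w	%1, %3, (%2)\n"	/* store-conditional: %1 == 0 on success */
	"	bnez	%1, 0b\n"	/* retry only when the SC failed */
	: "=&r" (old), "=&r" (fail)
	: "r" (p), "r" (newval)
	: "memory");

	return old;
}

The eventuality guarantee quoted above applies to loops of exactly this shape,
as long as no other hart or device keeps hitting the reservation set with
unconditional stores or AMOs.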

But it does not guarantee forward progress when another hart performs an
unconditional store (which I mentioned before):

u32 a = 0x55aa66bb;
u16 *ptr = (u16 *)&a;

CPU0                    CPU1
======                  ======
xchg16(ptr, new)        while (1)
                            WRITE_ONCE(*(ptr + 1), x);
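
To make the hazard concrete: riscv has no 16-bit AMO, so an xchg16() would
have to be emulated with an LR.W/SC.W loop on the containing aligned 32-bit
word. The sketch below (rough, not the actual arch code) writes it as a
cmpxchg() loop, which is itself LR.W/SC.W on riscv. A store from another hart
to *either* half of that word invalidates the reservation, so CPU1's
unconditional WRITE_ONCE() to the adjacent halfword is exactly the case the
eventuality guarantee excludes, and CPU0 can livelock.

/* Illustrative sketch, not arch/riscv code. */
static inline u16 xchg16_emulated(u16 *p, u16 newval)
{
	u32 *wp = (u32 *)((unsigned long)p & ~3UL);
	int shift = ((unsigned long)p & 2) * 8;	/* 0 or 16 (little-endian) */
	u32 mask = 0xffffU << shift;
	u32 old, tmp;

	do {
		old = READ_ONCE(*wp);
		tmp = (old & ~mask) | ((u32)newval << shift);
		/* cmpxchg() is itself an LR.W/SC.W loop on riscv */
	} while (cmpxchg(wp, old, tmp) != old);

	return (old & mask) >> shift;
}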



>
> Not familiar with RISC-V but I'd recommend that a trylock only fails if
> the lock is locked (after LR). A SC may fail for other reasons
> (cacheline eviction; depending on the microarchitecture) even if the
> lock is unlocked. At least on arm64 we had this issue with an
> implementation having a tendency to always fail the first STXR.

I think such an implementation is broken for riscv. An SC must not fail
because of cache line bouncing; it may only fail because of another real
write. That means the HW implementation should use a per-hart address
monitor, rather than just grabbing the cache line in the exclusive state
without locking down the snoop channel. The LR/SC implementation you
mentioned sounds like a gamble, and it breaks the riscv spec.
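
For reference, a trylock along the lines you recommend (one that only reports
failure when the lock is observed held, and retries internally if the SC
fails spuriously) could look like this rough sketch, reusing Peter's ticket
layout above with now-serving in the low halfword and the next ticket in the
high halfword:

/* Rough sketch, not a proposed implementation. */
static inline bool ticket_trylock(atomic_t *lock)
{
	int old = atomic_read(lock);

	do {
		/* held when now-serving != next: the only reason to fail */
		if ((u16)old != (u16)((u32)old >> 16))
			return false;
		/* unlocked: take the next ticket, which is also ours */
	} while (!atomic_try_cmpxchg_acquire(lock, &old,
					     (int)((u32)old + (1 << 16))));

	return true;
}

atomic_try_cmpxchg_acquire() updates 'old' when it fails, so a failure that
is not due to the lock being held just goes around the loop again instead of
being reported to the caller.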

Would Will's patch below fix the problem you mentioned?
----
commit 9bb17be062de6f5a9c9643258951aa0935652ec3
Author: Will Deacon <will.deacon@....com>
Date:   Tue Jul 2 14:54:33 2013 +0100

    ARM: locks: prefetch the destination word for write prior to strex

    The cost of changing a cacheline from shared to exclusive state can be
    significant, especially when this is triggered by an exclusive store,
    since it may result in having to retry the transaction.

    This patch prefixes our {spin,read,write}_[try]lock implementations with
    pldw instructions (on CPUs which support them) to try and grab the line
    in exclusive state from the start. arch_rwlock_t is changed to avoid
    using a volatile member, since this generates compiler warnings when
    falling back on the __builtin_prefetch intrinsic which expects a const
    void * argument.

    Acked-by: Nicolas Pitre <nico@...aro.org>
    Signed-off-by: Will Deacon <will.deacon@....com>
----

To conclude, my suggestions are:
 - Use ticket-lock as the default
 - Use ARCH_USE_QUEUED_SPINLOCKS_XCHG32 for the riscv qspinlock
 - Disable xchg16/cmpxchg16 and any other sub-word atomic primitives in riscv

--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/
