[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YHawVOIHmDPbTmoB@hirez.programming.kicks-ass.net>
Date: Wed, 14 Apr 2021 11:05:24 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Guo Ren <guoren@...nel.org>
Cc: Christoph Müllner <christophm30@...il.com>,
Palmer Dabbelt <palmer@...belt.com>,
Anup Patel <anup@...infault.org>,
linux-riscv <linux-riscv@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Guo Ren <guoren@...ux.alibaba.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH] riscv: locks: introduce ticket-based spinlock
implementation
On Wed, Apr 14, 2021 at 09:08:18AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 14, 2021 at 10:26:57AM +0800, Guo Ren wrote:
> > Thx Peter,
> >
> > On Tue, Apr 13, 2021 at 4:17 PM Peter Zijlstra <peterz@...radead.org> wrote:
> > >
> > > On Tue, Apr 13, 2021 at 10:03:01AM +0200, Peter Zijlstra wrote:
> > >
> > > > For ticket locks you really only needs atomic_fetch_add() and
> > > > smp_store_release() and an architectural guarantees that the
> > > > atomic_fetch_add() has fwd progress under contention and that a sub-word
> > > > store (through smp_store_release()) will fail the SC.
> > > >
> > > > Then you can do something like:
> > > >
> > > > void lock(atomic_t *lock)
> > > > {
> > > > u32 val = atomic_fetch_add(1<<16, lock); /* SC, gives us RCsc */
> > > > u16 ticket = val >> 16;
> > > >
> > > > for (;;) {
> > > > if (ticket == (u16)val)
> > > > break;
> > > > cpu_relax();
> > > > val = atomic_read_acquire(lock);
> > > > }
> > Should it be?
> > for (;;) {
> > if (ticket == (u16)val) {
> > __atomic_acquire_fence();
> > break;
> > }
>
> No, atomic_fetch_add() is full smp_mb(), it even has a comment on that
> says so.
>
> Also, __atomic_acquire_fence() is an implementation detail of atomic,
> and architectures need not provide it. On top of that, IIRC the atomic
> _acquire/_release have RCpc ordering, where we want our locks to have
> RCsc ordering (and very much not weaker than RCtso). Even more so,
> adding barriers to atomics should really not be conditional.
That made me look at the qspinlock code, and queued_spin_*lock() uses
atomic_try_cmpxchg_acquire(), which means any arch that uses qspinlock
and has RCpc atomics will give us massive pain.
Current archs using qspinlock are: x86, arm64, power, sparc64, mips and
openrisc (WTF?!).
Of those, x86 and sparc are TSO archs with SC atomics, arm64 has RCsc
atomics, power has RCtso atomics (and is the arch we all hate for having
RCtso locks).
Now MIPS has all sorts of ill specified barriers, but last time looked
at it it didn't actually use any of that and stuck to using smp_mb(), so
it will have RCsc atomics.
/me goes look at wth openrisc is.. doesn't even appear to have
asm/barrier.h :-/ Looking at wikipedia it also doesn't appear to
actually have hardware ...
I'm thinking openrisc is a prime candidate for this ticket_lock.h we're
all talking about.
Powered by blists - more mailing lists