Message-ID: <20180917081755.GO24124@hirez.programming.kicks-ass.net>
Date: Mon, 17 Sep 2018 10:17:55 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Guo Ren <ren_guo@...ky.com>
Cc: linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
tglx@...utronix.de, daniel.lezcano@...aro.org,
jason@...edaemon.net, arnd@...db.de, devicetree@...r.kernel.org,
andrea.parri@...rulasolutions.com, c-sky_gcc_upstream@...ky.com,
gnu-csky@...tor.com, thomas.petazzoni@...tlin.com,
wbx@...ibc-ng.org, green.hu@...il.com
Subject: Re: [PATCH V3 11/27] csky: Atomic operations
On Sat, Sep 15, 2018 at 10:55:13PM +0800, Guo Ren wrote:
> > > +#define ATOMIC_OP_RETURN(op, c_op) \
> > > +#define ATOMIC_FETCH_OP(op, c_op) \
> > For these you could generate _relaxed variants and not provide smp_mb()
> > inside them.
> Ok, but I'll modify it in the next commit.
That's fine. Just wanted to let you know about _relaxed() since it will
benefit your platform.
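For example, you can keep the body you already have and simply not put
smp_mb() in it; a sketch based on the ATOMIC_OP body below (the naming
follows the kernel convention, the body is illustrative):

	#define ATOMIC_OP_RETURN_RELAXED(op, c_op)			\
	static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \
	{								\
		unsigned long tmp, flags;				\
									\
		raw_local_irq_save(flags);				\
									\
		asm volatile (						\
		"	ldw	%0, (%2)	\n"			\
		"	" #op "	%0, %1		\n"			\
		"	stw	%0, (%2)	\n"			\
			: "=&r" (tmp)					\
			: "r" (i), "r"(&v->counter)			\
			: "memory");					\
									\
		raw_local_irq_restore(flags);				\
									\
		return tmp;						\
	}

	#define atomic_add_return_relaxed atomic_add_return_relaxed

Once the _relaxed ops are defined (and announced with the #define
above), include/linux/atomic.h generates the fully ordered
atomic_add_return() for you by bracketing the _relaxed call in barriers
(see __atomic_op_fence()), so the ordering lives in generic code
instead of being duplicated in every op.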
> > > +#define ATOMIC_OP(op, c_op) \
> > > +static inline void atomic_##op(int i, atomic_t *v) \
> > > +{ \
> > > + unsigned long tmp, flags; \
> > > + \
> > > + raw_local_irq_save(flags); \
> > > + \
> > > + asm volatile ( \
> > > + " ldw %0, (%2) \n" \
> > > + " " #op " %0, %1 \n" \
> > > + " stw %0, (%2) \n" \
> > > + : "=&r" (tmp) \
> > > + : "r" (i), "r"(&v->counter) \
> > > + : "memory"); \
> > > + \
> > > + raw_local_irq_restore(flags); \
> > > +}
> >
> > Is this really 'better' than the generic UP fallback implementation?
> There is a lock-irq instruction "idly4" that works without irq_save, e.g.:
> asm volatile ( \
> " idly4 \n" \
> " ldw %0, (%2) \n" \
> " " #op " %0, %1 \n" \
> " stw %0, (%2) \n" \
> I'll change to that after it's fully tested.
That is pretty nifty, could you explain (or point me to an arch doc
that does) the exact semantics of that "idly4" instruction?
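For reference, the generic UP fallback in include/asm-generic/atomic.h
does the same job in plain C, roughly:

	#define ATOMIC_OP(op, c_op)					\
	static inline void atomic_##op(int i, atomic_t *v)		\
	{								\
		unsigned long flags;					\
									\
		raw_local_irq_save(flags);				\
		v->counter = v->counter c_op i;				\
		raw_local_irq_restore(flags);				\
	}

And if idly4 does what the name suggests, masking interrupts for the
(up to) four instructions that follow it (that is the assumption the
question above is about), the op could presumably drop the
irq_save/restore entirely:

	#define ATOMIC_OP(op, c_op)					\
	static inline void atomic_##op(int i, atomic_t *v)		\
	{								\
		unsigned long tmp;					\
									\
		asm volatile (						\
		/* assumed: irqs masked for the next 4 insns */		\
		"	idly4			\n"			\
		"	ldw	%0, (%2)	\n"			\
		"	" #op "	%0, %1		\n"			\
		"	stw	%0, (%2)	\n"			\
			: "=&r" (tmp)					\
			: "r" (i), "r"(&v->counter)			\
			: "memory");					\
	}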
> > > +static inline void arch_spin_lock(arch_spinlock_t *lock)
> > > +{
> > > + arch_spinlock_t lockval;
> > > + u32 ticket_next = 1 << TICKET_NEXT;
> > > + u32 *p = &lock->lock;
> > > + u32 tmp;
> > > +
> > > + smp_mb();
> >
> > spin_lock() doesn't need smp_mb() before.
> read_lock and write_lock also needn't smp_mb() before, right?
Correct. The various *_lock() functions only need to imply an ACQUIRE
barrier, such that the critical section happens after the lock is taken.
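Concretely, with smp_mb() as the only tool, it suffices to put one
barrier after the lock is taken; a sketch, with your ldex/stex loop
abbreviated to a hypothetical fetch_add() that atomically adds and
returns the old value:

	static inline void arch_spin_lock(arch_spinlock_t *lock)
	{
		arch_spinlock_t lockval;
		u32 ticket_next = 1 << TICKET_NEXT;

		/* take the next ticket; the old value is our number */
		lockval.lock = fetch_add(&lock->lock, ticket_next);

		/* wait until it is our turn */
		while (lockval.tickets.next != READ_ONCE(lock->tickets.owner))
			cpu_relax();

		smp_mb();	/* ACQUIRE: critical section stays below */
	}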
> > > +
> > > +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > > +{
> > > + smp_mb();
> > > + lock->tickets.owner++;
> > > + smp_mb();
> >
> > spin_unlock() doesn't need smp_mb() after.
> read_unlock and write_unlock also needn't smp_mb() after, right?
Indeed so, the various *_unlock() functions only need to imply a RELEASE
barrier, such that the critical section happens before the lock is
released.
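So for the unlock a single barrier before the store that releases the
lock is enough:

	static inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		smp_mb();	/* RELEASE: critical section stays above */
		WRITE_ONCE(lock->tickets.owner, lock->tickets.owner + 1);
	}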
In both cases (lock and unlock) there is a great amount of subtle
detail, but most of that is irrelevant if all you have is smp_mb().
> > > +/*
> > > + * Test-and-set spin-locking.
> > > + */
> >
> > Why retain that?
> >
> > same comments; it has far too many smp_mb()s in.
> I'm not sure about queued_rwlocks, and for a 2-core SMP system
> test-and-set is faster and simpler, isn't it?
Even on 2 cores I think you can create starvation cases with
test-and-set spinlocks. And the maintenance overhead of carrying two
lock implementations is non-trivial.
As to performance, I cannot say, but the ticket lock isn't very
expensive; you could benchmark it, of course.
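To make the starvation point concrete, a test-and-set lock is
essentially this (a sketch using the generic kernel atomics):

	static inline void tas_lock(atomic_t *l)
	{
		/*
		 * Whoever wins the cacheline wins the lock; nothing
		 * stops the releasing CPU from immediately re-taking
		 * it, so the other CPU can spin here indefinitely.
		 * atomic_xchg() implies a full barrier, so this also
		 * provides ACQUIRE.
		 */
		while (atomic_xchg(l, 1))
			cpu_relax();
	}

	static inline void tas_unlock(atomic_t *l)
	{
		atomic_set_release(l, 0);	/* RELEASE is sufficient */
	}

The ticket lock avoids that by handing out turns in FIFO order.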