Date:	Tue, 7 Jul 2009 19:28:11 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Oleg Nesterov <oleg@...hat.com>, Jiri Olsa <jolsa@...hat.com>,
	Ingo Molnar <mingo@...e.hu>, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, fbl@...hat.com, nhorman@...hat.com,
	davem@...hat.com, htejun@...il.com, jarkao2@...il.com,
	davidel@...ilserver.org
Subject: Re: [PATCHv5 2/2] memory barrier: adding smp_mb__after_lock

* Eric Dumazet (eric.dumazet@...il.com) wrote:
> Mathieu Desnoyers wrote:
> > * Peter Zijlstra (a.p.zijlstra@...llo.nl) wrote:
> >> On Tue, 2009-07-07 at 17:44 +0200, Oleg Nesterov wrote:
> >>> On 07/07, Mathieu Desnoyers wrote:
> >>>> Actually, thinking about it more, to appropriately support x86, as well
> >>>> as powerpc, arm and mips, we would need something like:
> >>>>
> >>>> read_lock_smp_mb()
> >>>>
> >>>> Which would be a read_lock with an included memory barrier.
> >>> Then we need read_lock_irq_smp_mb, read_lock_irqsave__smp_mb, write_lock_xxx,
> >>> otherwise it is not clear why only read_lock() has _smp_mb() version.
> >>>
> >>> The same for spin_lock_xxx...
> >> At which time the smp_mb__{before,after}_{un,}lock become attractive
> >> again.
> >>
> > 
> > Then having a new __read_lock() (without acquire semantic) which would
> > be required to be followed by a smp__mb_after_lock() would make sense. I
> > think this would fit all of x86, powerpc, arm, mips without having to
> > create tons of new primitives. Only "simpler" ones that clearly separate
> > locking from memory barriers.
> > 
> 
> Hmm... On x86, read_lock() is :
> 
> 	lock subl $0x1,(%eax)
> 	jns   .Lok
> 	call	__read_lock_failed
> .Lok:	ret
> 
> 
> What would __read_lock() be? I can't see how it could *not* use the lock
> prefix, or how it could actually be any cheaper...
> 

(I'll use __read_lock_noacquire() rather than plain __read_lock(), because
__read_lock() is already used for low-level primitives and reusing it would
produce name clashes. I do recognise that "noacquire" is just an ugly name.)

Here, a __read_lock_noacquire() _must_ be followed by a
smp_mb__after_lock(), and a __read_unlock_norelease() _must_ be
preceded by a smp_mb__before_unlock().
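
To make the pairing concrete, here is the kind of caller I have in mind.
This is only a sketch of mine, not proposed code: the socket lock and the
condition tested are placeholders borrowed from the poll paths the patch
touches.

static unsigned int sock_poll_sketch(struct sock *sk)
{
	unsigned int mask = 0;

	__read_lock_noacquire(&sk->sk_callback_lock);
	/* The lock above has no acquire semantic: emit the full barrier
	 * explicitly, pairing with the barrier on the wakeup side. */
	smp_mb__after_lock();

	if (!skb_queue_empty(&sk->sk_receive_queue))
		mask |= POLLIN | POLLRDNORM;

	/* The unlock below has no release semantic: fence before it. */
	smp_mb__before_unlock();
	__read_unlock_norelease(&sk->sk_callback_lock);

	return mask;
}

With the pairing enforced by convention like this, each architecture can
then define the primitives as cheaply as its lock allows.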

x86 :

#define __read_lock_noacquire	read_lock
/* Assumes all x86 __*_lock_noacquire primitives (lock-prefixed ops)
 * already act as a full smp_mb(). */
#define smp_mb__after_lock()	do { } while (0)

/* Assumes all x86 __*_unlock_norelease primitives act as a full smp_mb(). */
#define smp_mb__before_unlock()	do { } while (0)
#define __read_unlock_norelease	read_unlock

it's that easy :-)
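
And for architectures that provide nothing special, a conservative generic
fallback keeps today's behaviour. Again, just a sketch of mine; nothing
like this exists in any tree:

/*
 * Generic fallback: correct but conservative.  An architecture whose
 * lock already implies a full barrier (x86 above) overrides these with
 * cheaper definitions.
 */
#ifndef __read_lock_noacquire
# define __read_lock_noacquire		read_lock
# define smp_mb__after_lock()		smp_mb()
#endif

#ifndef __read_unlock_norelease
# define smp_mb__before_unlock()	smp_mb()
# define __read_unlock_norelease	read_unlock
#endif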


However, on powerpc, we have to stop and think about it a bit more.

Quoting http://www.linuxjournal.com/article/8212 :

"lwsync, or lightweight sync, orders loads with respect to subsequent
loads and stores, and it also orders stores. However, it does not order
stores with respect to subsequent loads. Interestingly enough, the
lwsync instruction enforces the same ordering as does the zSeries and,
coincidentally, the SPARC TSO."
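
The ordering that matters for the smp_mb__after_lock() use case is exactly
the one lwsync does not give: a store done before or inside the lock must
be ordered against a load done after it. A store-buffering sketch of my
own (not from the article; flag names made up) shows the difference:

/* Both flags start at 0 and each function runs on its own CPU.  With a
 * full sync (smp_mb()) on both sides, the outcome "both return 0" is
 * forbidden; with only lwsync between the store and the load it is
 * allowed, which is exactly the race we must not reintroduce. */
int flag_a, flag_b;

int cpu0(void)			/* e.g. the sleeper/poll side */
{
	flag_a = 1;		/* "I am about to wait"         */
	smp_mb();		/* store->load ordering: sync   */
	return flag_b;		/* did the event already fire?  */
}

int cpu1(void)			/* e.g. the wakeup side */
{
	flag_b = 1;		/* "the event fired"            */
	smp_mb();		/* store->load ordering: sync   */
	return flag_a;		/* is anybody waiting?          */
}

So the barrier that must follow the no-acquire lock has to be a real sync,
and in exchange the isync/lwsync inside the lock primitive itself becomes
redundant: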

static inline long __read_trylock_noacquire(raw_rwlock_t *rw)
{
        long tmp;

        __asm__ __volatile__(
"1:     lwarx           %0,0,%1\n"
        __DO_SIGN_EXTEND
"       addic.          %0,%0,1\n\
        ble-            2f\n"
        PPC405_ERR77(0,%1)
"       stwcx.          %0,0,%1\n\
        bne-            1b\n"
        /* isync dropped here: the smp_mb() (sync instruction) that must
         * follow already acts as a core-synchronizing barrier. */
"2:"    : "=&r" (tmp)
        : "r" (&rw->lock)
        : "cr0", "xer", "memory");

        return tmp;
}
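
The slow path would mirror the existing powerpc __raw_read_lock() and
simply spin on the trylock above. A simplified sketch of mine (the real
code also lowers thread priority and yields to the hypervisor on shared
processors, which I leave out here):

static inline void __raw_read_lock_noacquire(raw_rwlock_t *rw)
{
        while (1) {
                if (likely(__read_trylock_noacquire(rw) > 0))
                        break;
                /* Write-locked: wait for the count to stop being
                 * negative before retrying the reservation. */
                while (unlikely(rw->lock < 0))
                        cpu_relax();
        }
}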

#define smp_mb__after_lock()	smp_mb()


#define smp_mb__before_unlock()	smp_mb()

static inline void __raw_read_unlock_norelease(raw_rwlock_t *rw)
{
        long tmp;

        __asm__ __volatile__(
        "# read_unlock\n\t"
        /* LWSYNC_ON_SMP -------- can be removed, replace by prior
         * smp_mb() */
"1:     lwarx           %0,0,%1\n\
        addic           %0,%0,-1\n"
        PPC405_ERR77(0,%1)
"       stwcx.          %0,0,%1\n\
        bne-            1b"
        : "=&r"(tmp)
        : "r"(&rw->lock)
        : "cr0", "xer", "memory");
}

I assume here that a lwarx/stwcx. pair cannot be reordered with other
lwarx/stwcx. pairs to different addresses. If it can, then we already have
a problem with the current powerpc read-lock implementation.

I just wrote this as an example to show how this could become a
performance improvement on architectures other than x86. The code
proposed above comes without warranty and should be tested with care. :)

Mathieu

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
