Date:   Mon, 20 Nov 2017 14:29:26 +0000
From:   Will Deacon <will.deacon@....com>
To:     Greentime Hu <green.hu@...il.com>
Cc:     greentime@...estech.com, linux-kernel@...r.kernel.org,
        arnd@...db.de, linux-arch@...r.kernel.org, tglx@...utronix.de,
        jason@...edaemon.net, marc.zyngier@....com, robh+dt@...nel.org,
        netdev@...r.kernel.org, Vincent Chen <vincentc@...estech.com>,
        peterz@...radead.org, paulmck@...ux.vnet.ibm.com
Subject: Re: [PATCH 11/31] nds32: Atomic operations

Hi Greentime,

On Wed, Nov 08, 2017 at 01:54:59PM +0800, Greentime Hu wrote:
> From: Greentime Hu <greentime@...estech.com>
> 
> Signed-off-by: Vincent Chen <vincentc@...estech.com>
> Signed-off-by: Greentime Hu <greentime@...estech.com>
> ---
>  arch/nds32/include/asm/futex.h    |  116 ++++++++++++++++++++++++
>  arch/nds32/include/asm/spinlock.h |  178 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 294 insertions(+)
>  create mode 100644 arch/nds32/include/asm/futex.h
>  create mode 100644 arch/nds32/include/asm/spinlock.h

[...]

> +static inline int
> +futex_atomic_cmpxchg_inatomic(u32 * uval, u32 __user * uaddr,
> +			      u32 oldval, u32 newval)
> +{
> +	int ret = 0;
> +	u32 val, tmp, flags;
> +
> +	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
> +		return -EFAULT;
> +
> +	smp_mb();
> +	asm volatile ("       movi    $ta, #0\n"
> +		      "1:     llw     %1, [%6 + $ta]\n"
> +		      "       sub     %3, %1, %4\n"
> +		      "       cmovz   %2, %5, %3\n"
> +		      "       cmovn   %2, %1, %3\n"
> +		      "2:     scw     %2, [%6 + $ta]\n"
> +		      "       beqz    %2, 1b\n"
> +		      "3:\n                   " __futex_atomic_ex_table("%7")
> +		      :"+&r"(ret), "=&r"(val), "=&r"(tmp), "=&r"(flags)
> +		      :"r"(oldval), "r"(newval), "r"(uaddr), "i"(-EFAULT)
> +		      :"$ta", "memory");
> +	smp_mb();
> +
> +	*uval = val;
> +	return ret;
> +}

I see you rely on asm-generic/barrier.h for your barrier definitions, which
suggests that you only need to prevent reordering by the compiler because
you're not SMP. Is that right? If so, using smp_mb() is a little weird.
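
For reference, on !CONFIG_SMP asm-generic/barrier.h boils the smp_*()
variants down to a plain compiler barrier (paraphrased, not the exact
file contents):

	/* include/asm-generic/barrier.h, !CONFIG_SMP case (roughly): */
	#define smp_mb()	barrier()
	#define smp_rmb()	barrier()
	#define smp_wmb()	barrier()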

What about DMA transactions? I imagine you might need some extra
instructions for the mandatory barriers there.
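
To make that concrete, here's the sort of (hypothetical, not from this
patch) driver sequence I have in mind, where the write barrier has to
order the descriptor update against the MMIO doorbell even on UP:

	/* Hypothetical driver snippet, for illustration only. */
	desc->addr = cpu_to_le32(dma_addr);	/* descriptor in DMA memory */
	wmb();					/* make it visible to the device... */
	writel(DOORBELL, dev->regs + TX_KICK);	/* ...before kicking the device */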

Also:

> +static inline void arch_spin_lock(arch_spinlock_t * lock)
> +{
> +	unsigned long tmp;
> +
> +	__asm__ __volatile__("1:\n"
> +			     "\tllw\t%0, [%1]\n"
> +			     "\tbnez\t%0, 1b\n"
> +			     "\tmovi\t%0, #0x1\n"
> +			     "\tscw\t%0, [%1]\n"
> +			     "\tbeqz\t%0, 1b\n"
> +			     :"=&r"(tmp)
> +			     :"r"(&lock->lock)
> +			     :"memory");
> +}

Here it looks like you're eliding an explicit barrier because you
already have a "memory" clobber. Can't you do the same for the futex code
above?
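
(For reference, barrier() itself is just an empty asm with a "memory"
clobber -- roughly:

	/* include/linux/compiler.h (roughly): */
	#define barrier()	__asm__ __volatile__("" : : : "memory")

so the clobber on your ll/sc sequence already gives you everything the
surrounding smp_mb() calls buy you in the !SMP case.)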

Will
