Date:   Thu, 28 Jan 2021 12:33:25 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Alexander Sverdlin <alexander.sverdlin@...ia.com>
Cc:     Paul Burton <paul.burton@...tec.com>, linux-mips@...r.kernel.org,
        Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
        Will Deacon <will@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/6] MIPS: Octeon: Implement __smp_store_release()

On Thu, Jan 28, 2021 at 08:27:29AM +0100, Alexander Sverdlin wrote:

> >> +#define __smp_store_release(p, v)					\
> >> +do {									\
> >> +	compiletime_assert_atomic_type(*p);				\
> >> +	__smp_wmb();							\
> >> +	__smp_rmb();							\
> >> +	WRITE_ONCE(*p, v);						\
> >> +} while (0)
> > This is wrong in general since smp_rmb() will only provide order between
> > two loads and smp_store_release() is a store.
> > 
> > If this is correct for all MIPS, this needs a giant comment on exactly
> > how that smp_rmb() makes sense here.
> 
> ... the macro is provided for Octeon only, and __smp_rmb() is actually a NOP
> there, but I thought I'd "document" the reasoning from the discussion
> above by including it anyway.

Random discussions on the internet do not absolve you from having to
write coherent comments. Especially so where memory ordering is
concerned.
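
For comparison, the generic fallback in asm-generic/barrier.h (roughly,
from memory) keeps a full barrier in front of the store, which is the
entire release guarantee: every earlier access, loads included, is
ordered before the WRITE_ONCE():

	#define __smp_store_release(p, v)				\
	do {								\
		compiletime_assert_atomic_type(*p);			\
		__smp_mb();						\
		WRITE_ONCE(*p, v);					\
	} while (0)

An __smp_rmb() in that slot only orders loads against loads; it does not
order earlier loads against the WRITE_ONCE(), and it adds nothing to what
__smp_wmb() already does for the stores. So the macro only makes sense if
Octeon never reorders loads at all, and that is exactly the property the
comment needs to spell out.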

This, from commit 6b07d38aaa52 ("MIPS: Octeon: Use optimized memory
barrier primitives."):

	#define smp_mb__before_llsc() smp_wmb()
	#define __smp_mb__before_llsc() __smp_wmb()

is also dodgy as hell and really wants a comment too. I'm not buying the
Changelog of that commit either: __smp_mb__before_llsc() should also
ensure the LL cannot happen earlier, but SYNCW has no effect on loads.
So what stops the load from being speculated?
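
To make the question concrete, here is an illustration of my own (not
code from the patch or from that commit): a value-returning atomic is
supposed to be fully ordered, as if an smp_mb() sat on each side of it,
and on MIPS the barrier in front of the LL/SC sequence is
smp_mb__before_llsc().

	#include <linux/atomic.h>

	static atomic_t v;
	static int x, y;

	void illustrate(void)
	{
		int r;

		r = READ_ONCE(x);		/* earlier load         */
		WRITE_ONCE(y, 1);		/* earlier store        */
		atomic_add_return(1, &v);	/* begins with LL of v  */
		/*
		 * With smp_mb__before_llsc() == SYNCW, the store to y is
		 * ordered before the stores of the LL/SC sequence, but
		 * SYNCW constrains stores only: nothing above explains
		 * what keeps the LL from being speculated ahead of the
		 * READ_ONCE(x).
		 */
		(void)r;
	}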

