Message-ID: <7af820e0b90848dbac4d3120758b1cf6@HQMAIL105.nvidia.com>
Date:   Thu, 16 Nov 2017 01:31:21 +0000
From:   Daniel Lustig <dlustig@...dia.com>
To:     Boqun Feng <boqun.feng@...il.com>
CC:     Palmer Dabbelt <palmer@...belt.com>,
        "will.deacon@....com" <will.deacon@....com>,
        Arnd Bergmann <arnd@...db.de>, Olof Johansson <olof@...om.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "patches@...ups.riscv.org" <patches@...ups.riscv.org>,
        "peterz@...radead.org" <peterz@...radead.org>
Subject: RE: [patches] Re: [PATCH v9 05/12] RISC-V: Atomic and Locking Code

> -----Original Message-----
> From: Boqun Feng [mailto:boqun.feng@...il.com]
> Sent: Wednesday, November 15, 2017 5:19 PM
> To: Daniel Lustig <dlustig@...dia.com>
> Cc: Palmer Dabbelt <palmer@...belt.com>; will.deacon@....com;
> Arnd Bergmann <arnd@...db.de>; Olof Johansson <olof@...om.net>;
> linux-kernel@...r.kernel.org; patches@...ups.riscv.org; peterz@...radead.org
> Subject: Re: [patches] Re: [PATCH v9 05/12] RISC-V: Atomic and Locking Code
> 
> On Wed, Nov 15, 2017 at 11:59:44PM +0000, Daniel Lustig wrote:
> > > On Wed, 15 Nov 2017 10:06:01 PST (-0800), will.deacon@....com wrote:
> > >> On Tue, Nov 14, 2017 at 12:30:59PM -0800, Palmer Dabbelt wrote:
> > >> >>On Tue, 24 Oct 2017 07:10:33 PDT (-0700), will.deacon@....com wrote:
> > >> >>On Tue, Sep 26, 2017 at 06:56:31PM -0700, Palmer Dabbelt wrote:
> > > >
> > > > Hi Palmer,
> > > >
> > > >> >>+ATOMIC_OPS(add, add, +,  i,      , _relaxed)
> > > >> >>+ATOMIC_OPS(add, add, +,  i, .aq  , _acquire)
> > > >> >>+ATOMIC_OPS(add, add, +,  i, .rl  , _release)
> > > >> >>+ATOMIC_OPS(add, add, +,  i, .aqrl,         )
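For reference, one ATOMIC_OPS() line above would expand to roughly the
following shape (a sketch only: the function name, operand constraints,
and return convention are assumed here, not quoted from the patch):

static __always_inline int atomic_add_return_relaxed(int i, atomic_t *v)
{
	int ret;

	/* amoadd.w atomically adds i to v->counter, returning the old value */
	__asm__ __volatile__ (
		"amoadd.w %1, %2, %0"
		: "+A" (v->counter), "=r" (ret)
		: "r" (i));

	return ret + i;		/* _return variants report the new value */
}

The .aq/.rl/.aqrl argument is spliced into the mnemonic, so the four
lines differ only in which annotation the AMO carries.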
> > > >> >
> > > >> >Have you checked that .aqrl is equivalent to "ordered"?  There are
> > > >> >interpretations where that isn't the case.  Specifically:
> > > >> >
> > > >> >// all variables zero at start of time
> > > >> >P0:
> > > >> >WRITE_ONCE(x, 1);
> > > >> >atomic_add_return(1, &y);
> > > >> >WRITE_ONCE(z, 1);
> > > >> >
> > > >> >P1:
> > > >> >READ_ONCE(z); // reads 1
> > > >> >smp_rmb();
> > > >> >READ_ONCE(x); // must not read 0
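For illustration, here is a rough user-space C11 rendering of this test.
It is an analogy rather than the kernel code: the fully ordered
atomic_add_return() is modeled as a relaxed fetch_add bracketed by
seq_cst fences (matching the smp_mb()-on-both-sides characterization in
Documentation/atomic_t.txt), WRITE_ONCE()/READ_ONCE() as relaxed
accesses, and smp_rmb() as an acquire fence:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x, y, z;

static void *p0(void *unused)
{
	atomic_store_explicit(&x, 1, memory_order_relaxed);     /* WRITE_ONCE(x, 1) */
	atomic_thread_fence(memory_order_seq_cst);              /* "fully ordered":  */
	atomic_fetch_add_explicit(&y, 1, memory_order_relaxed); /* fences on both    */
	atomic_thread_fence(memory_order_seq_cst);              /* sides of the RMW  */
	atomic_store_explicit(&z, 1, memory_order_relaxed);     /* WRITE_ONCE(z, 1) */
	return NULL;
}

static void *p1(void *unused)
{
	int rz = atomic_load_explicit(&z, memory_order_relaxed); /* READ_ONCE(z) */
	atomic_thread_fence(memory_order_acquire);               /* smp_rmb() */
	int rx = atomic_load_explicit(&x, memory_order_relaxed); /* READ_ONCE(x) */
	if (rz == 1 && rx == 0)
		printf("forbidden outcome observed\n");
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, p0, NULL);
	pthread_create(&t1, NULL, p1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}

A single run proves nothing about ordering, of course; the sketch only
pins down which outcome is being forbidden.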
> > > >>
> > > >> I haven't.  We don't quite have a formal memory model specification yet.
> > > >> I've added Daniel Lustig, who is creating that model.  He should
> > > >> have a better idea
> > > >
> > > > Thanks. You really do need to ensure that, as it's heavily relied upon.
> > >
> > > I know it's the case for our current processors, and I'm pretty sure
> > > it's the case for what's formally specified, but we'll have to wait
> > > for the spec in order to prove it.
> >
> > I think Will is right.  In the current spec, using .aqrl converts an
> > RCpc load or store into an RCsc load or store, but the acquire(-RCsc)
> > annotation still only applies to the load part of the atomic, and the
> > release(-RCsc) annotation applies only to the store part of the atomic.
> >
> > Why is that?  Picture a machine that implements AMOs using something
> > that looks more like LR/SC under the covers, or one that uses cache
> > line locking, or anything else along those same lines.  In some such
> > machines, there could be a window between the lock/reserve and the
> > unlock/store-conditional that later stores could squeeze into, and
> > that would break Will's example, among others.
> >
> > It's likely the same reasoning that causes ARM to use a trailing dmb
> > here, rather than just using ldaxr/stlxr.  Is that right, Will?  I know
> > that's LL/SC and this particular case uses AMOADD, but it's the same
> > principle.  Well, at least according to how we have it in the current
> > memory model draft.
> >
> > Also, RISC-V currently prefers leading fence mappings, so I think the
> > result here, for atomic_add_return() for example, should be this:
> >
> > fence rw,rw
> > amoadd.aq ...
> >
> 
> Hmm.. if atomic_add_return() is implemented like that, how about the
> following case:
> 
> 	{x=0, y=0}
> 
> 	P1:
> 
> 	r1 = atomic_add_return(1, &x); // r1 == 1, x will be 1 afterwards
> 	WRITE_ONCE(y, 1);
> 
> 	P2:
> 
> 	r2 = READ_ONCE(y); // r2 == 1
> 	smp_rmb();
> 	r3 = atomic_read(&x); // r3 == 0?
> 
> Could this result in r1 == 1 && r2 == 1 && r3 == 0?  Given that you said
> .aq only affects the load part of the AMO, I don't see anything here
> preventing the reordering between the store of y and the store part of
> the AMO on P1.
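To make that window concrete, here is a hypothetical C11 analog of the
leading-fence mapping, written to drop into the same kind of harness as
the earlier sketch.  The assumption is that atomic_add_return(1, &x)
becomes a seq_cst fence followed by an acquire-only fetch_add:

static void *p1(void *unused)
{
	atomic_thread_fence(memory_order_seq_cst);               /* fence rw,rw */
	int r1 = atomic_fetch_add_explicit(&x, 1,
			memory_order_acquire) + 1;               /* amoadd.aq; r1 == 1 */
	atomic_store_explicit(&y, 1, memory_order_relaxed);      /* WRITE_ONCE(y, 1) */
	(void)r1;
	return NULL;
}

static void *p2(void *unused)
{
	int r2 = atomic_load_explicit(&y, memory_order_relaxed); /* READ_ONCE(y) */
	atomic_thread_fence(memory_order_acquire);               /* smp_rmb() */
	int r3 = atomic_load_explicit(&x, memory_order_relaxed); /* atomic_read(&x) */
	/*
	 * Nothing orders the fetch_add's store part before the store
	 * to y, so r2 == 1 && r3 == 0 is not forbidden here.
	 */
	if (r2 == 1 && r3 == 0)
		printf("outcome the kernel forbids, but this mapping allows\n");
	return NULL;
}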
> 
> Note: we don't allow (r1 == 1 && r2 == 1 && r3 == 0) in the above case
> in the Linux kernel.  Please see Documentation/atomic_t.txt:
> 
> "Fully ordered primitives are ordered against everything prior and everything
> subsequent. Therefore a fully ordered primitive is like having an smp_mb()
> before and an smp_mb() after the primitive."

Yes, you're right, Boqun.  Good catch, and sorry for over-optimizing too quickly.

In that case, maybe we should just start out having a fence on both sides for
now, and then we'll discuss offline whether we want to change the model's
behavior here.
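Concretely, a minimal sketch of that interim mapping, with the function
shape and operand constraints assumed for illustration rather than taken
from the final code:

static __always_inline int atomic_add_return(int i, atomic_t *v)
{
	int ret;

	__asm__ __volatile__ (
		"fence rw, rw\n\t"
		"amoadd.w %1, %2, %0\n\t"
		"fence rw, rw"
		: "+A" (v->counter), "=r" (ret)
		: "r" (i)
		: "memory");

	return ret + i;
}

The trailing fence is what closes the window against later stores that
the .aq-only version leaves open.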

Dan
