Message-Id: <20190329220554.GD4102@linux.ibm.com>
Date:   Fri, 29 Mar 2019 15:05:54 -0700
From:   "Paul E. McKenney" <paulmck@...ux.ibm.com>
To:     "H. Peter Anvin" <hpa@...or.com>
Cc:     Alexander Potapenko <glider@...gle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Dmitriy Vyukov <dvyukov@...gle.com>,
        James Y Knight <jyknight@...gle.com>
Subject: Re: Potentially missing "memory" clobbers in bitops.h for x86

On Fri, Mar 29, 2019 at 02:51:26PM -0700, H. Peter Anvin wrote:
> On 3/29/19 2:09 PM, Paul E. McKenney wrote:
> >>
> >> Note: the atomic versions of these functions obviously need to have
> >> "volatile" and the clobber anyway, as they are by definition barriers
> >> and moving memory operations around them would be a very serious error.
> > 
> > The atomic functions that return void don't need to order anything except
> > the input and output arguments.  The oddness with clear_bit() is that the
> > memory changed isn't necessarily the quantity referenced by the argument,
> > if the number of bits specified is large.
> > 
> > So (for example) atomic_inc() does not need a "memory" clobber, right?
> 
> I don't believe that is true: the code calling it has a reasonable
> expectation that previous memory operations have finished and later
> memory operations have not started from the point of view of another
> processor. You are more of an expert on memory ordering than I am, but
> I'm 89% sure that there is plenty of code in the kernel which makes that
> assumption.

From Documentation/core-api/atomic_ops.rst:

------------------------------------------------------------------------
	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these two routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.
------------------------------------------------------------------------

So, no, these functions do not imply any ordering other than to the
variable modified.  This one predates my joining the Linux kernel
community.  ;-)  So any cases where someone is relying on atomic_inc()
to provide ordering are bugs.
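
Where a caller really does need ordering around one of these void-returning
atomics, it has to ask for it explicitly.  Roughly, the usual idiom looks
like the sketch below; the barrier and atomic APIs are the kernel's, the
surrounding example is invented:

	#include <linux/atomic.h>

	static atomic_t nr_events;
	static int event_data;

	static void record_event(int val)
	{
		WRITE_ONCE(event_data, val);
		smp_mb__before_atomic();	/* order the store above... */
		atomic_inc(&nr_events);		/* ...before this RMW, which
						 * provides no ordering of
						 * its own */
	}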

Now for value-returning atomics, for example, atomic_inc_return(),
full ordering is indeed required.
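
The difference is easy to see side by side; the helper names below are
invented, but the ordering properties are the documented ones:

	#include <linux/atomic.h>

	static atomic_t v;

	static void bump(void)
	{
		atomic_inc(&v);		/* counter update only; no ordering */
	}

	static int bump_and_read(void)
	{
		/* Fully ordered: acts as a full memory barrier both before
		 * and after the increment. */
		return atomic_inc_return(&v);
	}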

							Thanx, Paul
