Message-ID: <CD3717A9-E52B-425C-99B4-96ABD942E626@zytor.com>
Date: Fri, 29 Mar 2019 15:30:57 -0700
From: hpa@...or.com
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
CC: Alexander Potapenko <glider@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Dmitriy Vyukov <dvyukov@...gle.com>,
James Y Knight <jyknight@...gle.com>
Subject: Re: Potentially missing "memory" clobbers in bitops.h for x86
On March 29, 2019 3:05:54 PM PDT, "Paul E. McKenney" <paulmck@...ux.ibm.com> wrote:
>On Fri, Mar 29, 2019 at 02:51:26PM -0700, H. Peter Anvin wrote:
>> On 3/29/19 2:09 PM, Paul E. McKenney wrote:
>> >>
>> >> Note: the atomic versions of these functions obviously need to
>> >> have "volatile" and the clobber anyway, as they are by definition
>> >> barriers and moving memory operations around them would be a very
>> >> serious error.
>> >
>> > The atomic functions that return void don't need to order anything
>> > except the input and output arguments. The oddness with clear_bit()
>> > is that the memory changed isn't necessarily the quantity referenced
>> > by the argument, if the number of bits specified is large.
>> >
>> > So (for example) atomic_inc() does not need a "memory" clobber,
>> > right?
>>
>> I don't believe that is true: the code calling it has a reasonable
>> expectation that previous memory operations have finished and later
>> memory operations have not started from the point of view of another
>> processor. You are more of an expert on memory ordering than I am,
>> but I'm 89% sure that there is plenty of code in the kernel which
>> makes that assumption.
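
To make the clear_bit() hazard concrete, here is a minimal sketch --
made-up names, not the kernel's actual bitops.h code. The "lock"
prefix already orders the CPU on x86; the "memory" clobber is about
the compiler, which otherwise trusts the constraint list and assumes
only the named word is written:

	/* Illustrative only: a simplified clear_bit()-style helper. */
	static inline void sketch_clear_bit(long nr,
					    volatile unsigned long *addr)
	{
		/* "+m" (*addr) tells the compiler that only addr[0]
		 * changes, but for nr >= 64 "btrq" actually clears a
		 * bit in addr[nr / 64], a word the compiler does not
		 * know is written.  The "memory" clobber closes that
		 * hole. */
		asm volatile("lock btrq %1, %0"
			     : "+m" (*addr)
			     : "r" (nr)
			     : "memory");
	}

	unsigned long bits[4];
	int flag;

	void caller(void)
	{
		flag = 1;                    /* plain store    */
		sketch_clear_bit(130, bits); /* writes bits[2] */
		/* Without the clobber the compiler could sink the
		 * flag store past the asm, or reuse a stale value
		 * of bits[2] loaded earlier. */
	}
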
>
>From Documentation/core-api/atomic_ops.rst:
>
>------------------------------------------------------------------------
> void atomic_add(int i, atomic_t *v);
> void atomic_sub(int i, atomic_t *v);
> void atomic_inc(atomic_t *v);
> void atomic_dec(atomic_t *v);
>
>These four routines add and subtract integral values to/from the given
>atomic_t value. The first two routines pass explicit integers by
>which to make the adjustment, whereas the latter two use an implicit
>adjustment value of "1".
>
>One very important aspect of these two routines is that they DO NOT
>require any explicit memory barriers. They need only perform the
>atomic_t counter update in an SMP safe manner.
>------------------------------------------------------------------------
>
>So, no, these functions do not imply any ordering other than to the
>variable modified. This one predates my joining the Linux kernel
>community. ;-) So any cases where someone is relying on atomic_inc()
>to provide ordering are bugs.
>
>Now for value-returning atomics, for example, atomic_inc_return(),
>full ordering is indeed required.
>
> Thanx, Paul
Ok.
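
For completeness, a minimal sketch of the documented rule, using the
kernel's own primitives; the refcount example itself is made up:

	#include <linux/atomic.h>

	static atomic_t refs = ATOMIC_INIT(1);

	static void take_ref_unordered(void)
	{
		/* SMP-safe update, but implies NO ordering against
		 * surrounding loads and stores. */
		atomic_inc(&refs);
	}

	static void take_ref_ordered(void)
	{
		atomic_inc(&refs);
		/* When ordering is needed, pair the void atomic with
		 * an explicit barrier ... */
		smp_mb__after_atomic();
	}

	static int take_ref_return(void)
	{
		/* ... or use the value-returning form, which is
		 * documented to be fully ordered. */
		return atomic_inc_return(&refs);
	}

On x86 both forms happen to compile to lock-prefixed instructions and
are equally ordered in hardware, but portable code has to follow the
documented rule, not the x86 behavior.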
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.