Message-ID: <c46f8cfa-056a-059c-a193-376a0d710699@redhat.com>
Date:   Tue, 15 Oct 2019 16:31:04 -0400
From:   Waiman Long <longman@...hat.com>
To:     Manfred Spraul <manfred@...orfullife.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Davidlohr Bueso <dave@...olabs.net>, 1vier1@....de,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH 6/6] Documentation/memory-barriers.txt: Clarify cmpxchg()

On 10/14/19 1:49 PM, Manfred Spraul wrote:
> Hello Peter,
>
> On 10/14/19 3:03 PM, Peter Zijlstra wrote:
>> On Sat, Oct 12, 2019 at 07:49:58AM +0200, Manfred Spraul wrote:
>>> The documentation in memory-barriers.txt claims that
>>> smp_mb__{before,after}_atomic() are for atomic ops that do not return a
>>> value.
>>>
>>> This is misleading and doesn't match the example in atomic_t.txt,
>>> and e.g. smp_mb__before_atomic() may be, and is, used together with
>>> cmpxchg_relaxed() in the wake_q code.
>>>
>>> The purpose of e.g. smp_mb__before_atomic() is to "upgrade" a following
>>> RMW atomic operation so that it acts as a full memory barrier.
>>> Whether the atomic operation returns a value has no impact, so all of
>>> the following examples are valid:
>> The return value of atomic ops is relevant insofar as
>> (traditionally) all value-returning atomic ops already implied full
>> barriers. That of course changed when we added the
>> _release/_acquire/_relaxed variants.
> I've updated the change description accordingly.
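
To make that ordering difference concrete, a minimal sketch using the
kernel's atomic_t API (the counter below is illustrative, not taken from
any particular caller):

    atomic_t v = ATOMIC_INIT(0);
    int old;

    /* Traditional value-returning RMW op: already implies a full
     * barrier, no extra annotation needed. */
    old = atomic_fetch_add(1, &v);

    /* _relaxed variant: no ordering on its own, so it must be
     * upgraded explicitly where full ordering is required. */
    smp_mb__before_atomic();
    old = atomic_fetch_add_relaxed(1, &v);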
>>> 1)
>>>     smp_mb__before_atomic();
>>>     atomic_add();
>>>
>>> 2)
>>>     smp_mb__before_atomic();
>>>     atomic_xchg_relaxed();
>>>
>>> 3)
>>>     smp_mb__before_atomic();
>>>     atomic_fetch_add_relaxed();
>>>
>>> Invalid would be:
>>>     smp_mb__before_atomic();
>>>     atomic_set();
>>>
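For concreteness, the wake_q usage mentioned in the changelog has this
shape (a simplified sketch, not a verbatim copy of the kernel code):

    /*
     * Upgrade the relaxed cmpxchg to a fully ordered operation; per
     * the updated text, the upgrade also covers the case where the
     * conditional operation fails.
     */
    smp_mb__before_atomic();
    if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
        return false;   /* already queued */

atomic_set() is invalid here because it is a plain store, not an RMW
operation, so there is nothing for smp_mb__before_atomic() to upgrade.
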
>>> Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
>>> Cc: Waiman Long <longman@...hat.com>
>>> Cc: Davidlohr Bueso <dave@...olabs.net>
>>> Cc: Peter Zijlstra <peterz@...radead.org>
>>> ---
>>>   Documentation/memory-barriers.txt | 11 ++++++-----
>>>   1 file changed, 6 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
>>> index 1adbb8a371c7..52076b057400 100644
>>> --- a/Documentation/memory-barriers.txt
>>> +++ b/Documentation/memory-barriers.txt
>>> @@ -1873,12 +1873,13 @@ There are some more advanced barrier functions:
>>>    (*) smp_mb__before_atomic();
>>>    (*) smp_mb__after_atomic();
>>> -     These are for use with atomic (such as add, subtract, increment and
>>> -     decrement) functions that don't return a value, especially when used for
>>> -     reference counting.  These functions do not imply memory barriers.
>>> +     These are for use with atomic RMW functions (such as add, subtract,
>>> +     increment, decrement, failed conditional operations, ...) that do
>>> +     not imply memory barriers, but where the code needs a memory barrier,
>>> +     for example when used for reference counting.
>>>
>>> -     These are also used for atomic bitop functions that do not return a
>>> -     value (such as set_bit and clear_bit).
>>> +     These are also used for atomic RMW bitop functions that do imply a full
>> s/do/do not/ ?
> Sorry, yes, of course

I was wondering the same thing. With the revised patch,

Acked-by: Waiman Long <longman@...hat.com>

>>> +     memory barrier (such as set_bit and clear_bit).
>
>
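
For reference, the two uses the updated text describes look roughly like
the following minimal sketches ('obj', its fields, and the IN_PROGRESS bit
are illustrative names only; the reference-counting fragment mirrors the
example memory-barriers.txt already gives):

    /* Reference counting: atomic_dec() does not imply a barrier,
     * so the store to obj->dead must be ordered explicitly. */
    obj->dead = 1;
    smp_mb__before_atomic();
    atomic_dec(&obj->ref_count);

    /* Atomic RMW bitop: clear_bit() does not imply a barrier either,
     * so order the bit clear before waking any waiters. */
    clear_bit(IN_PROGRESS, &obj->flags);
    smp_mb__after_atomic();
    wake_up_bit(&obj->flags, IN_PROGRESS);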
