Message-ID: <faba0ab9-005b-0228-c652-6574b641665d@huaweicloud.com>
Date:   Fri, 20 Oct 2023 09:07:48 +0800
From:   Hou Tao <houtao@...weicloud.com>
To:     paulmck@...nel.org
Cc:     bpf@...r.kernel.org, David Vernet <void@...ifault.com>,
        Andrii Nakryiko <andrii@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Martin KaFai Lau <martin.lau@...ux.dev>,
        Song Liu <song@...nel.org>,
        Yonghong Song <yonghong.song@...ux.dev>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        Stanislav Fomichev <sdf@...gle.com>,
        Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf] Fold smp_mb__before_atomic() into
 atomic_set_release()

Hi Paul,

On 10/19/2023 10:25 PM, Paul E. McKenney wrote:
> On Thu, Oct 19, 2023 at 02:20:35PM +0800, Hou Tao wrote:
>> Hi Paul,
>>
>> On 10/19/2023 12:54 PM, Paul E. McKenney wrote:
>>> On Thu, Oct 19, 2023 at 09:07:07AM +0800, Hou Tao wrote:
>>>> Hi Paul,
>>>>
>>>> On 10/19/2023 6:28 AM, Paul E. McKenney wrote:
>>>>> bpf: Fold smp_mb__before_atomic() into atomic_set_release()
>>>>>
>>>>> The bpf_user_ringbuf_drain() BPF_CALL function uses an atomic_set()
>>>>> immediately preceded by smp_mb__before_atomic() so as to order storing
>>>>> of ring-buffer consumer and producer positions prior to the atomic_set()
>>>>> call's clearing of the ->busy flag, as follows:
>>>>>
>>>>>         smp_mb__before_atomic();
>>>>>         atomic_set(&rb->busy, 0);
>>>>>
>>>>> Although this works given current architectures and implementations, and
>>>>> given that this only needs to order prior writes against a later write,
>>>>> it does so by accident: smp_mb__before_atomic() is only guaranteed
>>>>> to work with read-modify-write atomic operations, and not at all with
>>>>> things like atomic_set() and atomic_read().
>>>>>
>>>>> Note especially that smp_mb__before_atomic() will not, repeat *not*,
>>>>> order the prior write to "a" before the subsequent non-read-modify-write
>>>>> atomic read from "b", even on strongly ordered systems such as x86:
>>>>>
>>>>>         WRITE_ONCE(a, 1);
>>>>>         smp_mb__before_atomic();
>>>>>         r1 = atomic_read(&b);
>>>> The reason is that smp_mb__before_atomic() is defined as a no-op and
>>>> atomic_read() on x86-64 is just READ_ONCE(), right?
>>> The real reason is that smp_mb__before_atomic() is not defined to do
>>> anything unless followed by an atomic read-modify-write operation,
>>> and atomic_read(), atomic64_read(), atomic_set(), and so on are not
>>> read-modify-write operations.
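>>>
>>> For example (a minimal sketch with hypothetical variables), contrast
>>> an RMW op, where the barrier does apply, with a plain atomic_read(),
>>> where it does not:
>>>
>>>         WRITE_ONCE(a, 1);
>>>         smp_mb__before_atomic();
>>>         atomic_inc(&counter);   /* RMW: the prior write to "a" is ordered */
>>>
>>>         WRITE_ONCE(a, 1);
>>>         smp_mb__before_atomic();
>>>         r1 = atomic_read(&b);   /* not RMW: no ordering guaranteed */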
>> I see. Thanks for the explanation. It seems I did not read
>> Documentation/atomic_t.txt carefully; it says:
>>
>>     The barriers:
>>
>>     smp_mb__{before,after}_atomic()
>>
>>     only apply to the RMW atomic ops and can be used to augment/upgrade the
>>     ordering inherent to the op.
> That is the place!
>
>>> As you point out, one implementation consequence of this is that
>>> smp_mb__before_atomic() is nothingness on x86.
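>>>
>>> Roughly speaking, because atomic RMW operations on x86 are already
>>> serializing, the architecture can override the generic definition
>>> with a no-op, along these lines (paraphrased, not an exact quote of
>>> arch/x86/include/asm/barrier.h):
>>>
>>>         /* Atomic RMW operations are already serializing on x86. */
>>>         #define __smp_mb__before_atomic()       do { } while (0)
>>>         #define __smp_mb__after_atomic()        do { } while (0)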
>>>
>>>> And it seems that I also used smp_mb__before_atomic() in the wrong way
>>>> in patch [1]. The memory ordering in the posted patch is
>>>>
>>>> process X                                    process Y
>>>>     atomic64_dec_and_test(&map->usercnt)
>>>>     READ_ONCE(timer->timer)
>>>>                                             timer->timer = t
>>> The above two lines are supposed to be accessing the same field, correct?
>>> If so, process Y's store really should be WRITE_ONCE().
>> Yes. These two processes are accessing the same field (namely
>> timer->timer). Is WRITE_ONCE() still necessary when the write to
>> timer->timer in process Y is protected by a spinlock?
> If there is any possibility of a concurrent reader, that is, a reader
> not holding that same lock, then yes, you should use WRITE_ONCE().
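>
> As a sketch (the lock name here is hypothetical), the locked writer
> still needs the marked store when a lockless reader exists:
>
> 	/* Writer, with the lock held. */
> 	spin_lock(&timer->lock);
> 	WRITE_ONCE(timer->timer, t);	/* pairs with the lockless READ_ONCE() */
> 	spin_unlock(&timer->lock);
>
> 	/* Reader, not holding the lock. */
> 	READ_ONCE(timer->timer);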

Got it. Will do.
>
> Compilers can do pretty vicious things to unmarked reads and writes.
> But don't take my word for it; here are a few writeups:
>
> o	"Who's afraid of a big bad optimizing compiler?" (series)
> 	https://lwn.net/Articles/793253, https://lwn.net/Articles/799218
>
> o	"An introduction to lockless algorithms" (Paolo Bonzini series)
> 	https://lwn.net/Articles/844224, https://lwn.net/Articles/846700,
> 	https://lwn.net/Articles/847481, https://lwn.net/Articles/847973,
> 	https://lwn.net/Articles/849237, https://lwn.net/Articles/850202
>
> o	"Is Parallel Programming Hard, And, If So, What Can You Do About It?"
> 	Section 4.3.4 ("Accessing Shared Variables")
> 	https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

Thanks for these excellent articles. Will read them carefully this time.

Regards,
Hou
>
>>>>                                             // it won't work
>>>>                                             smp_mb__before_atomic()
>>>>                                             atomic64_read(&map->usercnt)
>>>>
>>>> For this problem, it seems I need to replace smp_mb__before_atomic()
>>>> with smp_mb() to fix the memory ordering, right?
>>> Yes, because smp_mb() will order the prior store against that later load.
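>>>
>>> That is (as a sketch), process Y's sequence becomes:
>>>
>>>         timer->timer = t;	/* ideally WRITE_ONCE(timer->timer, t) */
>>>         smp_mb();		/* full barrier: orders the store before the load */
>>>         atomic64_read(&map->usercnt);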
>> Thanks. Will fix the patch.
> Very good!
>
> 							Thanx, Paul
>
>> Regards,
>> Hou
>>> 							Thanx, Paul
>>>
>>>> Regards,
>>>> Hou
>>>>
>>>> [1]:
>>>> https://lore.kernel.org/bpf/20231017125717.241101-2-houtao@huaweicloud.com/
>>>>
>>>>
>>>>> Therefore, replace the smp_mb__before_atomic() and atomic_set() with
>>>>> atomic_set_release() as follows:
>>>>>
>>>>>         atomic_set_release(&rb->busy, 0);
>>>>>
>>>>> This is no slower (and sometimes is faster) than the original, and also
>>>>> provides a formal guarantee of ordering that the original lacks.
>>>>>
>>>>> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
>>>>> Acked-by: David Vernet <void@...ifault.com>
>>>>> Cc: Andrii Nakryiko <andrii@...nel.org>
>>>>> Cc: Alexei Starovoitov <ast@...nel.org>
>>>>> Cc: Daniel Borkmann <daniel@...earbox.net>
>>>>> Cc: Martin KaFai Lau <martin.lau@...ux.dev>
>>>>> Cc: Song Liu <song@...nel.org>
>>>>> Cc: Yonghong Song <yonghong.song@...ux.dev>
>>>>> Cc: John Fastabend <john.fastabend@...il.com>
>>>>> Cc: KP Singh <kpsingh@...nel.org>
>>>>> Cc: Stanislav Fomichev <sdf@...gle.com>
>>>>> Cc: Hao Luo <haoluo@...gle.com>
>>>>> Cc: Jiri Olsa <jolsa@...nel.org>
>>>>> Cc: <bpf@...r.kernel.org>
>>>>>
>>>>> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
>>>>> index f045fde632e5..0ee653a936ea 100644
>>>>> --- a/kernel/bpf/ringbuf.c
>>>>> +++ b/kernel/bpf/ringbuf.c
>>>>> @@ -770,8 +770,7 @@ BPF_CALL_4(bpf_user_ringbuf_drain, struct bpf_map *, map,
>>>>>  	/* Prevent the clearing of the busy-bit from being reordered before the
>>>>>  	 * storing of any rb consumer or producer positions.
>>>>>  	 */
>>>>> -	smp_mb__before_atomic();
>>>>> -	atomic_set(&rb->busy, 0);
>>>>> +	atomic_set_release(&rb->busy, 0);
>>>>>  
>>>>>  	if (flags & BPF_RB_FORCE_WAKEUP)
>>>>>  		irq_work_queue(&rb->work);
>>>>>
>>>>> .
