Message-ID: <a53a3ff6-8c66-07c4-0163-e582d88843dd@linux.dev>
Date: Sun, 8 Oct 2023 14:59:51 +0800
From: Yajun Deng <yajun.deng@...ux.dev>
To: Eric Dumazet <edumazet@...gle.com>
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
 netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
 Alexander Lobakin <aleksander.lobakin@...el.com>
Subject: Re: [PATCH net-next v7] net/core: Introduce netdev_core_stats_inc()


On 2023/10/8 14:45, Eric Dumazet wrote:
> On Sat, Oct 7, 2023 at 8:34 AM Yajun Deng <yajun.deng@...ux.dev> wrote:
>>
>> On 2023/10/7 13:29, Eric Dumazet wrote:
>>> On Sat, Oct 7, 2023 at 7:06 AM Yajun Deng <yajun.deng@...ux.dev> wrote:
>>>> Although there is a kfree_skb_reason() helper function that can be used to
>>>> find the reason why an skb is dropped, most callers don't increment one of
>>>> rx_dropped, tx_dropped, rx_nohandler or rx_otherhost_dropped.
>>>>
>>> ...
>>>
>>>> +
>>>> +void netdev_core_stats_inc(struct net_device *dev, u32 offset)
>>>> +{
>>>> +       /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
>>>> +       struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
>>>> +       unsigned long *field;
>>>> +
>>>> +       if (unlikely(!p))
>>>> +               p = netdev_core_stats_alloc(dev);
>>>> +
>>>> +       if (p) {
>>>> +               field = (unsigned long *)((void *)this_cpu_ptr(p) + offset);
>>>> +               WRITE_ONCE(*field, READ_ONCE(*field) + 1);
>>> This is broken...
>>>
>>> As I explained earlier, dev_core_stats_xxxx(dev) can be called from
>>> many different contexts:
>>>
>>> 1) process contexts, where preemption and migration are allowed.
>>> 2) interrupt contexts.
>>>
>>> Adding WRITE_ONCE()/READ_ONCE() is not solving potential races.
>>>
>>> I _think_ I already gave you how to deal with this ?
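
As a sketch of the race being described (the function name below is
illustrative; it assumes an interrupt fires on the same CPU between the load
and the store of the open-coded increment):

/* Sketch of why the open-coded increment loses updates: the load and the
 * store are two separate operations, so an interrupt on the same CPU (or a
 * migration right after this_cpu_ptr()) can slip in between them.
 */
static void racy_inc(unsigned long *field)
{
	unsigned long tmp;

	tmp = READ_ONCE(*field);	/* process context reads N           */
					/* IRQ handler runs the same code:   */
					/* it reads N and stores N + 1       */
	WRITE_ONCE(*field, tmp + 1);	/* process context stores N + 1 too, */
					/* so the IRQ's increment is lost    */
}

this_cpu_inc(), by contrast, performs the whole read-modify-write as a single
preempt- and IRQ-safe per-CPU operation (a single add instruction on x86),
which is what the suggestion quoted below relies on.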
>>
>> Yes, I replied in v6.
>>
>> https://lore.kernel.org/all/e25b5f3c-bd97-56f0-de86-b93a3172870d@linux.dev/
>>
>>> Please try instead:
>>>
>>> +void netdev_core_stats_inc(struct net_device *dev, u32 offset)
>>> +{
>>> +       /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
>>> +       struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
>>> +       unsigned long __percpu *field;
>>> +
>>> +       if (unlikely(!p)) {
>>> +               p = netdev_core_stats_alloc(dev);
>>> +               if (!p)
>>> +                       return;
>>> +       }
>>> +       field = (__force unsigned long __percpu *)((__force void *)p + offset);
>>> +       this_cpu_inc(*field);
>>> +}
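
For context, a sketch of how the offset argument is typically produced by
callers, assuming offsetof()-based inline wrappers over
struct net_device_core_stats as in this series (the wrapper names mirror the
existing dev_core_stats_*_inc() helpers):

#include <linux/netdevice.h>
#include <linux/stddef.h>

/* Sketch: thin inline wrappers keep call sites unchanged
 * (dev_core_stats_rx_dropped_inc(dev) and friends) while the shared
 * out-of-line netdev_core_stats_inc() receives the field offset.
 */
#define DEV_CORE_STATS_INC(FIELD)					\
static inline void dev_core_stats_##FIELD##_inc(struct net_device *dev)\
{									\
	netdev_core_stats_inc(dev,					\
		offsetof(struct net_device_core_stats, FIELD));		\
}

DEV_CORE_STATS_INC(rx_dropped)
DEV_CORE_STATS_INC(tx_dropped)
DEV_CORE_STATS_INC(rx_nohandler)
DEV_CORE_STATS_INC(rx_otherhost_dropped)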
>>
>> This wouldn't trace anything even though rx_dropped is increasing. It
>> needs an extra operation added, such as:
> I honestly do not know what you are talking about.
>
> Have you even tried to change your patch to use
>
> field = (__force unsigned long __percpu *)((__force void *)p + offset);
> this_cpu_inc(*field);


Yes, I tested this code. But the following command couldn't show anything
even though rx_dropped was increasing.

'sudo python3 /usr/share/bcc/tools/trace netdev_core_stats_inc'

It needs something else to be added; then the above command shows output correctly.

>
> Instead of the clearly buggy code you had:
>
>      field = (unsigned long *)((void *)this_cpu_ptr(p) + offset);
>       WRITE_ONCE(*field, READ_ONCE(*field) + 1);
>
> If your v7 submission was ok for tracing what you wanted,
> I fail to see why a v8 with 3 lines changed would not work.


Me too.

If I add a pr_info() call in your code, the kprobe works fine.
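
For illustration only, a sketch of the change described here (the placement of
the pr_info() and its format string are assumptions):

void netdev_core_stats_inc(struct net_device *dev, u32 offset)
{
	/* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
	struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
	unsigned long __percpu *field;

	if (unlikely(!p)) {
		p = netdev_core_stats_alloc(dev);
		if (!p)
			return;
	}
	field = (__force unsigned long __percpu *)((__force void *)p + offset);
	this_cpu_inc(*field);
	/* Extra call mentioned above; with it in place, attaching a kprobe
	 * to netdev_core_stats_inc() reportedly produces output again.
	 */
	pr_info("%s: dev=%s offset=%u\n", __func__, netdev_name(dev), offset);
}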

