Message-ID: <CANn89iL-zUw1FqjYRSC7BGB0hfQ5uKpJzUba3YFd--c=GdOoGg@mail.gmail.com>
Date: Sat, 7 Oct 2023 07:29:24 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Yajun Deng <yajun.deng@...ux.dev>
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	Alexander Lobakin <aleksander.lobakin@...el.com>
Subject: Re: [PATCH net-next v7] net/core: Introduce netdev_core_stats_inc()

On Sat, Oct 7, 2023 at 7:06 AM Yajun Deng <yajun.deng@...ux.dev> wrote:
>
> Although there is a kfree_skb_reason() helper function that can be used to
> find the reason why this skb is dropped, most callers don't increment
> one of rx_dropped, tx_dropped, rx_nohandler and rx_otherhost_dropped.
> ...
> +
> +void netdev_core_stats_inc(struct net_device *dev, u32 offset)
> +{
> +	/* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
> +	struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
> +	unsigned long *field;
> +
> +	if (unlikely(!p))
> +		p = netdev_core_stats_alloc(dev);
> +
> +	if (p) {
> +		field = (unsigned long *)((void *)this_cpu_ptr(p) + offset);
> +		WRITE_ONCE(*field, READ_ONCE(*field) + 1);

This is broken...

As I explained earlier, dev_core_stats_xxxx(dev) can be called from
many different contexts:

1) process contexts, where preemption and migration are allowed.

2) interrupt contexts.

Adding WRITE_ONCE()/READ_ONCE() does not solve the potential races.

I _think_ I already showed you how to deal with this?

Please try instead:

+void netdev_core_stats_inc(struct net_device *dev, u32 offset)
+{
+	/* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
+	struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
+	unsigned long __percpu *field;
+
+	if (unlikely(!p)) {
+		p = netdev_core_stats_alloc(dev);
+		if (!p)
+			return;
+	}
+	field = (__force unsigned long __percpu *)((__force void *)p + offset);
+	this_cpu_inc(*field);
+}
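[Editor's note] For reference, the reason netdev_core_stats_inc() takes a byte
offset is that one out-of-line routine can then service all four counters. Below
is a minimal sketch, assuming the netdev_core_stats_inc() from this patch is
declared in <linux/netdevice.h>, of how per-field inline helpers such as the
dev_core_stats_xxxx() callers Eric mentions could forward the offset of their
field; the exact upstream macros may differ.

#include <linux/netdevice.h>	/* struct net_device, struct net_device_core_stats */
#include <linux/stddef.h>	/* offsetof() */

/* Hypothetical wrappers: each helper passes the offset of its counter
 * within struct net_device_core_stats to the shared increment routine.
 */
#define DEV_CORE_STATS_INC(FIELD)					\
static inline void dev_core_stats_##FIELD##_inc(struct net_device *dev)\
{									\
	netdev_core_stats_inc(dev,					\
			offsetof(struct net_device_core_stats, FIELD));	\
}

DEV_CORE_STATS_INC(rx_dropped)
DEV_CORE_STATS_INC(tx_dropped)
DEV_CORE_STATS_INC(rx_nohandler)
DEV_CORE_STATS_INC(rx_otherhost_dropped)

The reason this_cpu_inc() is preferred over the READ_ONCE()/WRITE_ONCE() pair is
that it performs the whole read-modify-write as a single per-CPU operation that
is safe against both preemption/migration and local interrupts, so an increment
from IRQ context cannot be lost between the load and the store of a
process-context caller.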