Message-ID: <CANn89iLATYmTgvxxLjv9nQ1opGVqDZpYfxc64qsk0H0sUQvEWw@mail.gmail.com>
Date: Sat, 20 Sep 2025 12:38:35 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "David S . Miller" <davem@...emloft.net>, Paolo Abeni <pabeni@...hat.com>, 
	Simon Horman <horms@...nel.org>, Willem de Bruijn <willemb@...gle.com>, 
	Kuniyuki Iwashima <kuniyu@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH v2 net-next] udp: remove busylock and add per NUMA queues

On Sat, Sep 20, 2025 at 11:11 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Sat, 20 Sep 2025 08:02:27 +0000 Eric Dumazet wrote:
> > busylock was protecting UDP sockets against packet floods,
> > but unfortunately was not protecting the host itself.
> >
> > Under stress, many cpus could spin while acquiring the busylock,
> > and the NIC had to drop packets. Or packets would be dropped
> > in the cpu backlog if RPS/RFS were in place.
> >
> > This patch replaces the busylock with intermediate
> > lockless queues (one queue per NUMA node).
> >
> > This means that fewer cpus have to acquire
> > the UDP receive queue lock.
> >
> > Most of the cpus can either:
> > - immediately drop the packet.
> > - or queue it in their NUMA aware lockless queue.
> >
> > Then one of the cpus is chosen to process this lockless queue
> > in a batch.
> >
> > The batch only contains packets that were cooked on the same
> > NUMA node, thus with very limited latency impact.
>
> Occasionally hitting a UaF like this:
> https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/306342/3-fcnal-ipv6-sh/stderr
> decoded:
> https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/306342/vm-crash-thr2-0
> --
> pw-bot: cr

Yeah, destroy is called while there are packets still in flight, from inet_release().

I have to move the kfree(up->udp_prod_queue) calls into udp_destruct_common().

I will test:

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index fedc939342f3d1ab580548e2b4dd39b5e3a1c397..59bf422151171330b7190523e0f287947409b6b5
100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1808,6 +1808,7 @@ void udp_destruct_common(struct sock *sk)
                kfree_skb(skb);
        }
        udp_rmem_release(sk, total, 0, true);
+       kfree(up->udp_prod_queue);
 }
 EXPORT_IPV6_MOD_GPL(udp_destruct_common);

@@ -2912,7 +2913,6 @@ void udp_destroy_sock(struct sock *sk)
                        udp_tunnel_cleanup_gro(sk);
                }
        }
-       kfree(up->udp_prod_queue);
 }

 typedef struct sk_buff *(*udp_gro_receive_t)(struct sock *sk,
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 90e2945e6cf9066bc36c57cbb29b8aa68e7afe4e..813a2ba75824d14631642bf6973f65063b2825cb
100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -1829,7 +1829,6 @@ void udpv6_destroy_sock(struct sock *sk)
                        udp_tunnel_cleanup_gro(sk);
                }
        }
-       kfree(up->udp_prod_queue);
 }

 /*
