Message-ID: <20250916160951.541279-1-edumazet@google.com>
Date: Tue, 16 Sep 2025 16:09:41 +0000
From: Eric Dumazet <edumazet@...gle.com>
To: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Paolo Abeni <pabeni@...hat.com>
Cc: Simon Horman <horms@...nel.org>, Willem de Bruijn <willemb@...gle.com>, 
	Kuniyuki Iwashima <kuniyu@...gle.com>, David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org, 
	eric.dumazet@...il.com, Eric Dumazet <edumazet@...gle.com>
Subject: [PATCH net-next 00/10] udp: increase RX performance under stress

This series is the result of a careful analysis of the UDP stack,
aiming to optimize the receive side, especially when one or several
UDP sockets are the target of a DDoS attack.

I measured a 47% throughput increase with IPv6 UDP packets
carrying 120 bytes of payload, under DDoS.

In this test, 16 CPUs receive traffic targeting a single socket.

Even after adding NUMA-aware drop counters, we were still suffering
from false sharing between packet producers and the consumer.
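
To illustrate the false-sharing issue, here is a minimal sketch of the
usual kernel remedy (struct and field names are illustrative, not the
actual struct sock layout): producer-written fields get their own
cache line, so per-packet atomic updates do not invalidate the line
the consumer keeps hot.

#include <linux/cache.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct rx_sock_sketch {
	/* consumer side: read on every recvmsg() */
	struct list_head	receive_queue;

	/* producer side: written on every received packet;
	 * the annotation below starts a fresh cache line.
	 */
	atomic_t		rmem_alloc ____cacheline_aligned_in_smp;
	spinlock_t		busylock;
};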

1) The first four patches shrink struct ipv6_pinfo and reorganize
   its fields for a more efficient TX path (first sketch after this
   list). They should also benefit TCP by removing one cache line
   miss.

2) patches 5 & 6 changes how sk->sk_rmem_alloc is read and updated.
   They reduce reduce spinlock contention on the busylock.

3) Patches 7 & 8 change the ordering of sk_backlog (including
   sk_rmem_alloc) sk_receive_queue and sk_drop_counters for
   better data locality.

4) Patch 9 removes the hashed array of spinlocks in favor of
   a per-udp-socket one (third sketch below).

5) The final patch adopts skb_attempt_defer_free(), after TCP got
   good results with it (fourth sketch below).
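
First sketch, for patches 1-4 (hedged; the structs and field below
are illustrative, not the actual ipv6_pinfo layout): replacing a
pointer-sized cache field with a boolean shrinks the structure and
lets the TX path test a flag instead of comparing a pointer.

#include <linux/in6.h>
#include <linux/types.h>

struct pinfo_sketch_before {
	struct in6_addr	*saddr_cache;	/* 8 bytes on 64-bit */
};

struct pinfo_sketch_after {
	bool		saddr_cache;	/* 1 byte, packs with neighbours */
};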
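
Second sketch, for patches 5 & 6 (hedged; enqueue_sketch() and the
exact limit test are illustrative, not the series' code): charging
sk->sk_rmem_alloc up front lets over-limit packets be dropped without
ever touching the contended busylock.

#include <net/sock.h>

static int enqueue_sketch(struct sock *sk, struct sk_buff *skb)
{
	int rmem;

	/* charge receive memory before taking any lock */
	rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
	if (rmem > READ_ONCE(sk->sk_rcvbuf)) {
		/* over limit: undo and drop without the busylock */
		atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
		return -ENOBUFS;
	}
	/* ... acquire busylock, queue skb, release ... */
	return 0;
}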
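
Third sketch, for patch 9 (hedged; names are illustrative): with a
shared hashed array, two unrelated sockets can collide on the same
lock, while a lock embedded in the socket removes that cross-socket
interference.

#include <linux/hash.h>
#include <linux/spinlock.h>

/* before: one static array shared by every UDP socket
 * (initialization omitted in this sketch)
 */
#define BUSYLOCK_BITS	6
static spinlock_t busylocks[1 << BUSYLOCK_BITS];

static spinlock_t *busylock_get_sketch(const void *sk)
{
	return &busylocks[hash_ptr(sk, BUSYLOCK_BITS)];
}

/* after: each socket owns its lock */
struct udp_sock_sketch {
	spinlock_t	busylock;
};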
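
Fourth sketch, for patch 10 (hedged; udp_eat_skb_sketch() is an
illustrative name, not from the series): skb_attempt_defer_free()
hands the skb back to the CPU that allocated it, keeping that CPU's
caches and recycling pools effective.

#include <linux/skbuff.h>

static void udp_eat_skb_sketch(struct sk_buff *skb)
{
	/* was: consume_skb(skb), freeing on the consumer CPU */
	skb_attempt_defer_free(skb);	/* defer free to the alloc CPU */
}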


Eric Dumazet (10):
  ipv6: make ipv6_pinfo.saddr_cache a boolean
  ipv6: make ipv6_pinfo.daddr_cache a boolean
  ipv6: np->rxpmtu race annotation
  ipv6: reorganise struct ipv6_pinfo
  udp: refine __udp_enqueue_schedule_skb() test
  udp: update sk_rmem_alloc before busylock acquisition
  net: group sk_backlog and sk_receive_queue
  udp: add udp_drops_inc() helper
  udp: make busylock per socket
  udp: use skb_attempt_defer_free()

 include/linux/ipv6.h             | 37 ++++++++++++-----------
 include/linux/udp.h              |  1 +
 include/net/ip6_route.h          |  8 ++---
 include/net/sock.h               |  4 +--
 include/net/udp.h                |  6 ++++
 net/core/sock.c                  |  1 -
 net/ipv4/udp.c                   | 50 ++++++++++++++------------------
 net/ipv6/af_inet6.c              |  2 +-
 net/ipv6/inet6_connection_sock.c |  2 +-
 net/ipv6/ip6_output.c            |  6 ++--
 net/ipv6/raw.c                   |  2 +-
 net/ipv6/route.c                 |  7 ++---
 net/ipv6/tcp_ipv6.c              |  4 +--
 net/ipv6/udp.c                   |  8 ++---
 14 files changed, 69 insertions(+), 69 deletions(-)

-- 
2.51.0.384.g4c02a37b29-goog

