Message-Id: <20260108102950.49417-1-mahdifrmx@gmail.com>
Date: Thu,  8 Jan 2026 13:59:50 +0330
From: Mahdi Faramarzpour <mahdifrmx@...il.com>
To: netdev@...r.kernel.org
Cc: willemdebruijn.kernel@...il.com,
	davem@...emloft.net,
	dsahern@...nel.org,
	edumazet@...gle.com,
	kuba@...nel.org,
	pabeni@...hat.com,
	horms@...nel.org,
	kshitiz.bartariya@...omail.in,
	Mahdi Faramarzpour <mahdifrmx@...il.com>
Subject: [PATCH net-next] udp: add drop count for packets in udp_prod_queue

Increment the SNMP drop counters (UDP_MIB_MEMERRORS and
UDP_MIB_INERRORS) for packets dropped from the per-NUMA queues
introduced in commit b650bf0977d3 ("udp: remove busylock and add per
NUMA queues").
Signed-off-by: Mahdi Faramarzpour <mahdifrmx@...il.com>
---
v4:
  - move all changes to unlikely(to_drop) branch
v3: https://lore.kernel.org/netdev/20260105114732.140719-1-mahdifrmx@gmail.com/
  - remove the unreachable UDP_MIB_RCVBUFERRORS code
v2: https://lore.kernel.org/netdev/20260105071218.10785-1-mahdifrmx@gmail.com/
  - change ENOMEM to ENOBUFS
v1: https://lore.kernel.org/netdev/20260104105732.427691-1-mahdifrmx@gmail.com/
---
 net/ipv4/udp.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index ffe074cb5..399d1a357 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1705,6 +1705,10 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	unsigned int rmem, rcvbuf;
 	int size, err = -ENOMEM;
 	int total_size = 0;
+	struct {
+		int ipv4;
+		int ipv6;
+	} mem_err_count;
 	int q_size = 0;
 	int dropcount;
 	int nb = 0;
@@ -1793,14 +1797,28 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	}
 
 	if (unlikely(to_drop)) {
+		mem_err_count.ipv4 = 0;
+		mem_err_count.ipv6 = 0;
 		for (nb = 0; to_drop != NULL; nb++) {
 			skb = to_drop;
+			if (skb->protocol == htons(ETH_P_IP))
+				mem_err_count.ipv4++;
+			else
+				mem_err_count.ipv6++;
 			to_drop = skb->next;
 			skb_mark_not_on_list(skb);
-			/* TODO: update SNMP values. */
 			sk_skb_reason_drop(sk, skb, SKB_DROP_REASON_PROTO_MEM);
 		}
 		numa_drop_add(&udp_sk(sk)->drop_counters, nb);
+
+		SNMP_ADD_STATS(__UDPX_MIB(sk, true), UDP_MIB_MEMERRORS,
+			       mem_err_count.ipv4);
+		SNMP_ADD_STATS(__UDPX_MIB(sk, true), UDP_MIB_INERRORS,
+			       mem_err_count.ipv4);
+		SNMP_ADD_STATS(__UDPX_MIB(sk, false), UDP_MIB_MEMERRORS,
+			       mem_err_count.ipv6);
+		SNMP_ADD_STATS(__UDPX_MIB(sk, false), UDP_MIB_INERRORS,
+			       mem_err_count.ipv6);
 	}
 
 	atomic_sub(total_size, &udp_prod_queue->rmem_alloc);
-- 
2.34.1

