Message-ID: <05b086b3-c82d-4aba-b185-2d39ba968a72@redhat.com>
Date: Fri, 23 Jan 2026 16:25:21 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Mahdi Faramarzpour <mahdifrmx@...il.com>, netdev@...r.kernel.org
Cc: davem@...emloft.net, dsahern@...nel.org, edumazet@...gle.com,
kuba@...nel.org, horms@...nel.org, kshitiz.bartariya@...omail.in
Subject: Re: [PATCH net-next] udp: add drop count for packets in
udp_prod_queue
On 1/23/26 3:41 PM, Willem de Bruijn wrote:
> Paolo Abeni wrote:
>> I think that doing the SNMP accounting in __udp_enqueue_schedule_skb()
>> (for `to_drop`), __udp_queue_rcv_skb() and __udpv6_queue_rcv_skb() (for
>> `skb`) is a little confusing and possibly error-prone in the long run.
>>
>> I'm wondering if something like the following (completely untested, not
>> even built! just to give the idea) would be better?
>
> I don't see the error-prone issue with the simpler patch.
> But SGTM if you prefer this.
It's not a big deal, but if we ever need to update the UDP MIB
accounting again, having to look in several different places could
lead to missing something.

The code I proposed could be made smaller with something like the
following (still completely untested), which in turn looks quite
similar to where this patch is heading, I guess. So really it's not a
big deal.
---
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 1db63db7e5d4..431de8dda0d3 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1793,6 +1795,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 		}
 	}
 
+	atomic_sub(total_size, &udp_prod_queue->rmem_alloc);
 	if (unlikely(to_drop)) {
 		for (nb = 0; to_drop != NULL; nb++) {
 			skb = to_drop;
@@ -1802,10 +1805,9 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 			sk_skb_reason_drop(sk, skb, SKB_DROP_REASON_PROTO_MEM);
 		}
 		numa_drop_add(&udp_sk(sk)->drop_counters, nb);
+		return nb;
 	}
 
-	atomic_sub(total_size, &udp_prod_queue->rmem_alloc);
-
 	return 0;
 
 drop:
@@ -2345,6 +2347,9 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 	}
 
 	rc = __udp_enqueue_schedule_skb(sk, skb);
+	if (likely(!rc))
+		return 0;
+
 	if (rc < 0) {
 		int is_udplite = IS_UDPLITE(sk);
 		int drop_reason;
@@ -2365,6 +2370,9 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 		return -1;
 	}
 
+	/* rc > 0, packets dropped after dequeueing from prod_queue */
+	SNMP_ADD_STATS(__UDPX_MIB(sk, true), UDP_MIB_MEMERRORS, rc);
+	SNMP_ADD_STATS(__UDPX_MIB(sk, true), UDP_MIB_INERRORS, rc);
 	return 0;
 }
 
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 010b909275dd..df895096669e 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -793,6 +793,9 @@ static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 	}
 
 	rc = __udp_enqueue_schedule_skb(sk, skb);
+	if (likely(!rc))
+		return 0;
+
 	if (rc < 0) {
 		int is_udplite = IS_UDPLITE(sk);
 		enum skb_drop_reason drop_reason;
@@ -813,6 +816,9 @@ static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 		return -1;
 	}
 
+	/* rc > 0, packets dropped after dequeueing from prod_queue */
+	SNMP_ADD_STATS(__UDPX_MIB(sk, false), UDP_MIB_MEMERRORS, rc);
+	SNMP_ADD_STATS(__UDPX_MIB(sk, false), UDP_MIB_INERRORS, rc);
 	return 0;
 }
 
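
For reference, here is a minimal stand-alone sketch of the return-value
convention the diff above relies on. This is not kernel code: the
mock_enqueue()/mock_queue_rcv() helpers and the plain counters are
made-up stand-ins for __udp_enqueue_schedule_skb(), its rcv_skb callers
and the UDP MIB, and the rc < 0 path is simplified (the real code picks
RCVBUFERRORS vs MEMERRORS based on the errno).

/* User-space sketch of the proposed convention:
 *   rc == 0 : packet queued, nothing dropped
 *   rc <  0 : this packet could not be queued, caller drops it
 *   rc >  0 : packet queued, but 'rc' packets previously sitting in the
 *             producer queue were dropped and must be accounted for here
 */
#include <stdio.h>

static unsigned long mib_rcvbuferrors, mib_memerrors, mib_inerrors;

/* Stand-in for __udp_enqueue_schedule_skb() */
static int mock_enqueue(int has_room, int prod_queue_drops)
{
	if (!has_room)
		return -1;		/* caller must drop this packet */
	return prod_queue_drops;	/* 0 on plain success */
}

/* Mirrors the shape of __udp_queue_rcv_skb() above: every MIB update
 * happens in this single place, keyed only on the sign of rc.
 */
static int mock_queue_rcv(int has_room, int prod_queue_drops)
{
	int rc = mock_enqueue(has_room, prod_queue_drops);

	if (!rc)
		return 0;

	if (rc < 0) {
		mib_rcvbuferrors++;
		mib_inerrors++;
		return -1;
	}

	/* rc > 0: packets dropped after dequeueing from the producer queue */
	mib_memerrors += rc;
	mib_inerrors += rc;
	return 0;
}

int main(void)
{
	mock_queue_rcv(1, 0);	/* normal enqueue */
	mock_queue_rcv(0, 0);	/* receive buffer full, packet dropped */
	mock_queue_rcv(1, 3);	/* queued, but 3 producer-queue packets dropped */
	printf("rcvbuferrors=%lu memerrors=%lu inerrors=%lu\n",
	       mib_rcvbuferrors, mib_memerrors, mib_inerrors);
	return 0;
}

The point is simply that, with this shape, the MIB bookkeeping for both
the rc < 0 and rc > 0 cases lives in the per-family rcv_skb helpers and
nowhere else.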