Message-ID: <20240328144032.1864988-3-edumazet@google.com>
Date: Thu, 28 Mar 2024 14:40:30 +0000
From: Eric Dumazet <edumazet@...gle.com>
To: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: Willem de Bruijn <willemb@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com,
Eric Dumazet <edumazet@...gle.com>
Subject: [PATCH net-next 2/4] udp: relax atomic operation on sk->sk_rmem_alloc
atomic_add_return() is more expensive than atomic_add(), and seems
overkill in the UDP rx fast path.
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
---
net/ipv4/udp.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index f2736e8958187e132ef45d8e25ab2b4ea7bcbc3d..d2fa9755727ce034c2b4bca82bd9e72130d588e6 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1516,12 +1516,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	size = skb->truesize;
 	udp_set_dev_scratch(skb);
 
-	/* we drop only if the receive buf is full and the receive
-	 * queue contains some other skb
-	 */
-	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
-	if (rmem > (size + (unsigned int)sk->sk_rcvbuf))
-		goto uncharge_drop;
+	atomic_add(size, &sk->sk_rmem_alloc);
 
 	spin_lock(&list->lock);
 	err = udp_rmem_schedule(sk, size);
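
Side note, not part of the patch: the removed code is a value-returning
RMW used only to test a limit, while the replacement is a plain add whose
result is never read. Below is a minimal userspace sketch of the two
patterns in C11 atomics; the names rmem_alloc, charge_and_check() and
charge() are made up for illustration. Kernel atomic_add() has no
ordering guarantees (roughly memory_order_relaxed), whereas
atomic_add_return() implies a full barrier on top of returning the new
value.

	#include <stdatomic.h>

	static atomic_int rmem_alloc;

	/* Old pattern: value-returning RMW so the new total can be tested. */
	static int charge_and_check(int size, int rcvbuf)
	{
		int rmem = atomic_fetch_add(&rmem_alloc, size) + size;

		/* Nonzero means the caller should uncharge and drop. */
		return rmem > size + rcvbuf;
	}

	/* New pattern: plain add; no return value, relaxed ordering suffices. */
	static void charge(int size)
	{
		atomic_fetch_add_explicit(&rmem_alloc, size,
					  memory_order_relaxed);
	}

On x86-64 both forms compile to a single lock-prefixed instruction
(lock xadd vs lock add), so the saving there is mostly the dropped value
dependency; on arm64 the gap is presumably larger, since a fully ordered
returning RMW cannot use the cheaper non-returning store-side atomics.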
--
2.44.0.396.g6e790dbe36-goog