Message-ID: <20250916160951.541279-8-edumazet@google.com>
Date: Tue, 16 Sep 2025 16:09:48 +0000
From: Eric Dumazet <edumazet@...gle.com>
To: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: Simon Horman <horms@...nel.org>, Willem de Bruijn <willemb@...gle.com>,
Kuniyuki Iwashima <kuniyu@...gle.com>, David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
eric.dumazet@...il.com, Eric Dumazet <edumazet@...gle.com>
Subject: [PATCH net-next 07/10] net: group sk_backlog and sk_receive_queue

UDP receivers suffer from sk_rmem_alloc updates, because this field
currently shares a cache line with fields that need to be read-mostly
(the sock_read_rx group):

1) RFS-enabled hosts read sk_napi_id
   from __udpv6_queue_rcv_skb().

2) sk->sk_rcvbuf is read from __udp_enqueue_schedule_skb().

(A userspace model of this sharing pattern is sketched after the
layout dump below.)

Current layout of this part of struct sock:

/* --- cacheline 3 boundary (192 bytes) --- */
struct {
	atomic_t           rmem_alloc;                /* 0xc0   0x4 */ // Oops
	int                len;                       /* 0xc4   0x4 */
	struct sk_buff *   head;                      /* 0xc8   0x8 */
	struct sk_buff *   tail;                      /* 0xd0   0x8 */
} sk_backlog;                                         /* 0xc0  0x18 */
__u8 __cacheline_group_end__sock_write_rx[0];         /* 0xd8     0 */
__u8 __cacheline_group_begin__sock_read_rx[0];        /* 0xd8     0 */
struct dst_entry * sk_rx_dst;                         /* 0xd8   0x8 */
int                sk_rx_dst_ifindex;                 /* 0xe0   0x4 */
u32                sk_rx_dst_cookie;                  /* 0xe4   0x4 */
unsigned int       sk_ll_usec;                        /* 0xe8   0x4 */
unsigned int       sk_napi_id;                        /* 0xec   0x4 */
u16                sk_busy_poll_budget;               /* 0xf0   0x2 */
u8                 sk_prefer_busy_poll;               /* 0xf2   0x1 */
u8                 sk_userlocks;                      /* 0xf3   0x1 */
int                sk_rcvbuf;                         /* 0xf4   0x4 */
struct sk_filter * sk_filter;                         /* 0xf8   0x8 */
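
The sharing pattern above can be modelled outside the kernel. The
following is a minimal userspace sketch of the false sharing, not
kernel code: struct fake_sock, its fields and the iteration count are
illustrative stand-ins for sk_rmem_alloc, sk_rcvbuf and sk_napi_id
(build with e.g. gcc -O2 -pthread):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 100000000L

/* Illustrative stand-in for the area of struct sock shown above. */
struct fake_sock {
	atomic_int rmem_alloc;		/* hot: written for every datagram */
	volatile int rcvbuf;		/* read-mostly, same cache line */
	volatile unsigned int napi_id;	/* read-mostly, same cache line */
};

static struct fake_sock sk = { .rcvbuf = 212992, .napi_id = 1 };

/* Models the enqueue side charging receive memory (a real path
 * would add skb->truesize). */
static void *writer(void *arg)
{
	(void)arg;
	for (long i = 0; i < ITERS; i++)
		atomic_fetch_add_explicit(&sk.rmem_alloc, 1,
					  memory_order_relaxed);
	return NULL;
}

/* Models readers of sk_rcvbuf / sk_napi_id: each writer update
 * invalidates the shared cache line, so these loads keep missing. */
static void *reader(void *arg)
{
	unsigned long sum = 0;

	(void)arg;
	for (long i = 0; i < ITERS; i++)
		sum += (unsigned long)sk.rcvbuf + sk.napi_id;
	return (void *)sum;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	printf("rmem_alloc=%d\n", atomic_load(&sk.rmem_alloc));
	return 0;
}

Running the reader with and without the writer (e.g. under
perf stat -e cache-misses) should show the cost of the shared line;
giving the read-mostly fields their own cache line makes it go away,
which is what this patch achieves for the real struct sock.
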
Move sk_error_queue (which is less often dirtied) there: this shifts
sk_backlog up, so that sk_rmem_alloc updates dirty the cache line
holding sk_receive_queue instead of the one holding the sock_read_rx
group.

An alternative would be to cacheline-align sock_read_rx, but this
has more implications/risks; a rough model of that option is
sketched below.
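
For reference, the rejected alternative would amount to forcing the
start of the sock_read_rx group onto its own cache line (in kernel
terms, roughly tagging __cacheline_group_begin(sock_read_rx) with
____cacheline_aligned). A minimal userspace model of that shape,
assuming 64-byte cache lines; struct fake_sock_aligned and its
fields are illustrative, not kernel code:

#include <stddef.h>

#define CACHE_BYTES 64	/* assumed L1 cache line size */

struct fake_sock_aligned {
	int rmem_alloc;				/* hot, frequently written */
	struct {
		int rcvbuf;			/* read-mostly */
		unsigned int napi_id;		/* read-mostly */
	} read_rx __attribute__((aligned(CACHE_BYTES)));
};

/* The alignment ends the false sharing without moving any field, but
 * the padding inserted before read_rx grows the structure, which is
 * part of the "implications/risks" mentioned above. */
_Static_assert(offsetof(struct fake_sock_aligned, read_rx) % CACHE_BYTES == 0,
	       "read_rx must start a fresh cache line");
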
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
---
include/net/sock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 0fd465935334160eeda7c1ea608f5d6161f02cb1..867dc44140d4c1b56ecfab1220c81133fe0394a0 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -394,7 +394,6 @@ struct sock {
 
 	atomic_t		sk_drops;
 	__s32			sk_peek_off;
-	struct sk_buff_head	sk_error_queue;
 	struct sk_buff_head	sk_receive_queue;
 	/*
 	 * The backlog queue is special, it is always used with
@@ -412,6 +411,7 @@ struct sock {
 	} sk_backlog;
 #define sk_rmem_alloc sk_backlog.rmem_alloc
 
+	struct sk_buff_head	sk_error_queue;
 	__cacheline_group_end(sock_write_rx);
 
 	__cacheline_group_begin(sock_read_rx);
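
For completeness, the expected layout after this patch, assuming the
0x18-byte sk_buff_head from the dump above and no other padding
changes (derived by hand, not an actual pahole run):

/* --- cacheline 2 boundary (128 bytes) --- */
	...
	struct sk_buff_head  sk_receive_queue;        /* 0x90  0x18 */
	struct {
		atomic_t     rmem_alloc;              /* 0xa8   0x4 */
		...
	} sk_backlog;                                 /* 0xa8  0x18 */
/* --- cacheline 3 boundary (192 bytes) --- */
	struct sk_buff_head  sk_error_queue;          /* 0xc0  0x18 */
	__u8 __cacheline_group_end__sock_write_rx[0]; /* 0xd8     0 */
	__u8 __cacheline_group_begin__sock_read_rx[0];/* 0xd8     0 */

sk_rmem_alloc updates now dirty cacheline 2, shared with
sk_receive_queue (written by the same receive path anyway, hence the
subject line), while the rarely dirtied sk_error_queue fills the gap
in front of sock_read_rx.
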
--
2.51.0.384.g4c02a37b29-goog