Message-ID: <7904adc0b3ab1c6b4bc328b0509435c9d38fc98a.camel@redhat.com>
Date: Fri, 16 Feb 2024 17:59:01 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>, "David S . Miller"
<davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, eric.dumazet@...il.com, Neal Cardwell
<ncardwell@...gle.com>, Naman Gulati <namangulati@...gle.com>, Coco Li
<lixiaoyan@...gle.com>, Wei Wang <weiwan@...gle.com>, Jon Maloy
<jmaloy@...hat.com>
Subject: Re: [PATCH net-next] net: reorganize "struct sock" fields
On Fri, 2024-02-16 at 16:20 +0000, Eric Dumazet wrote:
> Last major reorg happened in commit 9115e8cd2a0c ("net: reorganize
> struct sock for better data locality")
>
> Since then, many changes have been done.
>
> Before SO_PEEK_OFF support is added to TCP, we need
> to move sk_peek_off to a better location.
>
> It is time to make another pass, and add six groups,
> without explicit alignment.
>
> - sock_write_rx (following sk_refcnt) read-write fields in rx path.
> - sock_read_rx read-mostly fields in rx path.
> - sock_read_rxtx read-mostly fields in both rx and tx paths.
> - sock_write_rxtx read-write fields in both rx and tx paths.
> - sock_write_tx read-write fields in tx paths.
> - sock_read_tx read-mostly fields in tx paths.
>
> Results on TCP_RR benchmarks seem to show a gain (4 to 5 %).
>
> It is possible UDP needs a change, because sk_peek_off
> shares a cache line with sk_receive_queue.
Yes, I think we need to touch UDP.
> If this is the case, we can exchange roles of the sk->sk_receive_queue
> and up->reader_queue queues.
That option looks quite invasive and possibly error-prone to me. What
about adding a 'peeking_with_offset' flag near up->reader_queue, setting
it via a UDP-specific set_peek_off(), and testing that flag in
udp_recvmsg() before accessing sk->sk_peek_off?
> After this change, we have the following layout:
Looks great!
Acked-by: Paolo Abeni <pabeni@...hat.com>
I'll try to run some benchmarks when time allows ;)
Many thanks!
Paolo