Message-ID: <aXfN+/v8JAh9GcQU@devvm11784.nha0.facebook.com>
Date: Mon, 26 Jan 2026 12:26:35 -0800
From: Bobby Eshleman <bobbyeshleman@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, netdev@...r.kernel.org,
eric.dumazet@...il.com
Subject: Re: [PATCH net-next 4/4] net: inline get_netmem() and put_netmem()
On Thu, Jan 22, 2026 at 04:57:19AM +0000, Eric Dumazet wrote:
> These helpers are used in network fast paths.
>
> Only call out-of-line helpers for netmem case.
>
> We might consider inlining __get_netmem() and __put_netmem()
> in the future.
>
> $ scripts/bloat-o-meter -t vmlinux.3 vmlinux.4
> add/remove: 6/6 grow/shrink: 22/1 up/down: 2614/-646 (1968)
> Function old new delta
> pskb_carve 1669 1894 +225
> gro_pull_from_frag0 - 206 +206
> get_page 190 380 +190
> skb_segment 3561 3747 +186
> put_page 595 765 +170
> skb_copy_ubufs 1683 1822 +139
> __pskb_trim_head 276 401 +125
> __pskb_copy_fclone 734 858 +124
> skb_zerocopy 1092 1215 +123
> pskb_expand_head 892 1008 +116
> skb_split 828 940 +112
> skb_release_data 297 409 +112
> ___pskb_trim 829 941 +112
> __skb_zcopy_downgrade_managed 120 226 +106
> tcp_clone_payload 530 634 +104
> esp_ssg_unref 191 294 +103
> dev_gro_receive 1464 1514 +50
> __put_netmem - 41 +41
> __get_netmem - 41 +41
> skb_shift 1139 1175 +36
> skb_try_coalesce 681 714 +33
> __pfx_put_page 112 144 +32
> __pfx_get_page 32 64 +32
> __pskb_pull_tail 1137 1168 +31
> veth_xdp_get 250 267 +17
> __pfx_gro_pull_from_frag0 - 16 +16
> __pfx___put_netmem - 16 +16
> __pfx___get_netmem - 16 +16
> __pfx_put_netmem 16 - -16
> __pfx_gro_try_pull_from_frag0 16 - -16
> __pfx_get_netmem 16 - -16
> put_netmem 114 - -114
> get_netmem 130 - -130
> napi_gro_frags 929 771 -158
> gro_try_pull_from_frag0 196 - -196
> Total: Before=22565857, After=22567825, chg +0.01%
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> ---
> include/net/netmem.h | 20 ++++++++++++++++++--
> net/core/skbuff.c | 31 ++++++++++---------------------
> 2 files changed, 28 insertions(+), 23 deletions(-)
>
> diff --git a/include/net/netmem.h b/include/net/netmem.h
> index 2113a197abb315f608ee3d6d3e8a60811b3781f8..a96b3e5e5574c1800ae7949c39366968707ab5d5 100644
> --- a/include/net/netmem.h
> +++ b/include/net/netmem.h
> @@ -401,8 +401,24 @@ static inline bool net_is_devmem_iov(const struct net_iov *niov)
> }
> #endif
>
> -void get_netmem(netmem_ref netmem);
> -void put_netmem(netmem_ref netmem);
> +void __get_netmem(netmem_ref netmem);
> +void __put_netmem(netmem_ref netmem);
> +
> +static __always_inline void get_netmem(netmem_ref netmem)
> +{
> + if (netmem_is_net_iov(netmem))
> + __get_netmem(netmem);
> + else
> + get_page(netmem_to_page(netmem));
> +}
> +
> +static __always_inline void put_netmem(netmem_ref netmem)
> +{
> + if (netmem_is_net_iov(netmem))
> + __put_netmem(netmem);
> + else
> + put_page(netmem_to_page(netmem));
> +}
>
> #define netmem_dma_unmap_addr_set(NETMEM, PTR, ADDR_NAME, VAL) \
> do { \
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index c57c806edba8524d3d498800e61ae6901fbfe5fb..2a8235c3d6f7fcee8b0b28607c10db985965b8d4 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -7422,31 +7422,20 @@ bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
> }
> EXPORT_SYMBOL(csum_and_copy_from_iter_full);
>
> -void get_netmem(netmem_ref netmem)
> +void __get_netmem(netmem_ref netmem)
> {
> - struct net_iov *niov;
> + struct net_iov *niov = netmem_to_net_iov(netmem);
>
> - if (netmem_is_net_iov(netmem)) {
> - niov = netmem_to_net_iov(netmem);
> - if (net_is_devmem_iov(niov))
> - net_devmem_get_net_iov(netmem_to_net_iov(netmem));
> - return;
> - }
> - get_page(netmem_to_page(netmem));
> + if (net_is_devmem_iov(niov))
> + net_devmem_get_net_iov(netmem_to_net_iov(netmem));
I wonder if this would be a good time to simply re-use niov from above
here instead of re-converting with netmem_to_net_iov()? I do acknowledge
the original code did not do this either.
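
i.e., something like this (untested sketch, just to illustrate what I mean):

```c
	/* niov was already converted above; no need to convert again */
	if (net_is_devmem_iov(niov))
		net_devmem_get_net_iov(niov);
```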
Best,
Bobby