Message-ID: <07a89ee6-2886-65b8-d2cb-ca154f1f1f4f@intel.com>
Date: Thu, 16 Feb 2023 18:26:19 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Saeed Mahameed <saeed@...nel.org>
CC: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
"Saeed Mahameed" <saeedm@...dia.com>, <netdev@...r.kernel.org>,
Tariq Toukan <tariqt@...dia.com>, Gal Pressman <gal@...dia.com>
Subject: Re: [net-next 1/9] net/mlx5e: Switch to using napi_build_skb()
From: Saeed Mahameed <saeed@...nel.org>
Date: Wed, 15 Feb 2023 16:09:10 -0800
> From: Tariq Toukan <tariqt@...dia.com>
>
> Use napi_build_skb(), which uses the NAPI per-CPU caches to obtain the
> skbuff head instead of allocating it in place.
>
> napi_build_skb() calls napi_skb_cache_get(), which returns a cached
> skb, or allocates NAPI_SKB_CACHE_BULK (16) skbs in bulk if the cache
> is empty.
>
> Performance test:
> TCP single stream, single ring, single core, default MTU (1500B).
>
> Before: 26.5 Gbits/sec
> After: 30.1 Gbits/sec (+13.6%)
+14%, gosh! Happy to see more and more vendors switching to it. Someone
told me back then that RAM is so fast nowadays that it wouldn't make any
sense to directly recycle kmem-cached objects. Maybe it's fast, but
apparently not *that* fast :D
Reviewed-by: Alexander Lobakin <aleksander.lobakin@...el.com>
>
> Signed-off-by: Tariq Toukan <tariqt@...dia.com>
> Reviewed-by: Gal Pressman <gal@...dia.com>
> Signed-off-by: Saeed Mahameed <saeedm@...dia.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index a9473a51edc1..9ac2c7778b5b 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -1556,7 +1556,7 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
> u32 frag_size, u16 headroom,
> u32 cqe_bcnt, u32 metasize)
> {
> - struct sk_buff *skb = build_skb(va, frag_size);
> + struct sk_buff *skb = napi_build_skb(va, frag_size);
>
> if (unlikely(!skb)) {
> rq->stats->buff_alloc_err++;
Thanks,
Olek