Message-ID: <Y7h/6RwHW2IU3dq3@x130>
Date: Fri, 6 Jan 2023 12:09:13 -0800
From: Saeed Mahameed <saeed@...nel.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
"David S. Miller" <davem@...emloft.net>, edumazet@...gle.com,
pabeni@...hat.com
Subject: Re: [PATCH net-next 2/2] net: kfree_skb_list use kmem_cache_free_bulk
On 05 Jan 16:42, Jesper Dangaard Brouer wrote:
>The kfree_skb_list function walks the SKB list (via skb->next) and
>frees the SKBs individually to the SLUB/SLAB allocator (kmem_cache).
>It is more efficient to bulk free them via the kmem_cache_free_bulk
>API.
>
>This patch creates a stack-local array of SKBs to bulk free while
>walking the list. The bulk array size is limited to 16 SKBs to trade
>off stack usage against efficiency. The SLUB kmem_cache
>"skbuff_head_cache" uses an objsize of 256 bytes, usually in an
>order-1 page of 8192 bytes, which gives 32 objects per slab (this can
>vary across archs and due to SLUB sharing). Thus, for SLUB the
>optimal bulk free case is 32 objects belonging to the same slab, but
>at runtime this is unlikely to occur.
>
>Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
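For anyone skimming the thread, here is a rough sketch of the pattern
the commit message describes (illustrative only, not the actual patch
hunk: the helper name is made up, and a real implementation must first
release the SKB's data and handle refcounted/fclone SKBs before the
head can be handed back to the allocator):

	#include <linux/skbuff.h>
	#include <linux/slab.h>

	/* Cache backing struct sk_buff heads (owned by the skbuff code). */
	extern struct kmem_cache *skbuff_head_cache;

	#define KFREE_SKB_BULK_SIZE	16

	static void kfree_skb_list_bulk_sketch(struct sk_buff *segs)
	{
		void *skbs[KFREE_SKB_BULK_SIZE];
		size_t cnt = 0;

		while (segs) {
			struct sk_buff *next = segs->next;

			/* Collect SKB heads in the stack-local array ... */
			skbs[cnt++] = segs;

			/* ... and bulk free each full batch in one call. */
			if (cnt == KFREE_SKB_BULK_SIZE) {
				kmem_cache_free_bulk(skbuff_head_cache,
						     cnt, skbs);
				cnt = 0;
			}
			segs = next;
		}

		/* Flush any remaining partial batch. */
		if (cnt)
			kmem_cache_free_bulk(skbuff_head_cache, cnt, skbs);
	}
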
Any performance numbers?
LGTM,
Reviewed-by: Saeed Mahameed <saeed@...nel.org>