Message-ID: <CANn89i+POvkrx-RW3WNA2-1oQSdHt2-0sOddQWwtGQkAbW9RFQ@mail.gmail.com>
Date: Wed, 18 Jan 2023 17:05:09 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
"David S. Miller" <davem@...emloft.net>, pabeni@...hat.com
Subject: Re: [PATCH net-next V2 2/2] net: kfree_skb_list use kmem_cache_free_bulk
On Fri, Jan 13, 2023 at 2:52 PM Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
>
> The kfree_skb_list function walks the SKB list (via skb->next) and frees
> each SKB individually to the SLUB/SLAB allocator (kmem_cache). It is more
> efficient to bulk free them via the kmem_cache_free_bulk API.
>
> This patch creates a stack-local array of SKBs to bulk free while
> walking the list (a rough sketch of the pattern is included after the
> quoted changelog). The bulk array size is limited to 16 SKBs to trade
> off stack usage against efficiency. The SLUB kmem_cache
> "skbuff_head_cache" uses an object size of 256 bytes, usually in an
> order-1 page of 8192 bytes, i.e. 32 objects per slab (this can vary
> across archs and due to SLUB sharing). Thus, for SLUB the optimal bulk
> free case is 32 objects belonging to the same slab, but at runtime this
> is unlikely to occur.
>
> The expected gain from using the kmem_cache bulk alloc and free API
> has been assessed via a microbenchmark kernel module[1].
>
> The module 'slab_bulk_test01' results with a bulk size of 16 elements:
> kmem-in-loop Per elem: 109 cycles(tsc) 30.532 ns (step:16)
> kmem-bulk Per elem: 64 cycles(tsc) 17.905 ns (step:16)
>
> A more detailed description of the benchmarks is available in [2].
>
> [1] https://github.com/netoptimizer/prototype-kernel/tree/master/kernel/mm
> [2] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/kfree_skb_list01.org
>
> V2: rename function to kfree_skb_add_bulk.
>
> Reviewed-by: Saeed Mahameed <saeed@...nel.org>
> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> ---
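
For reference, here is a minimal sketch of the bulk-free scheme the changelog
describes: collect freed SKB heads in a stack-local array of 16 and hand them
to kmem_cache_free_bulk() in one call. Apart from kfree_skb_add_bulk,
skbuff_head_cache and kmem_cache_free_bulk, the names are assumptions
(__kfree_skb_reason() and skb_release_all() stand in for the existing static
helpers in net/core/skbuff.c, where this code is assumed to live), so treat
it as illustrative rather than the exact patch:

/* Illustrative sketch only; assumed to sit in net/core/skbuff.c next to
 * the existing static helpers it calls.
 */
#define KFREE_SKB_BULK_SIZE	16

struct skb_free_array {
	unsigned int skb_count;
	void *skb_array[KFREE_SKB_BULK_SIZE];
};

static void kfree_skb_add_bulk(struct sk_buff *skb,
			       struct skb_free_array *sa,
			       enum skb_drop_reason reason)
{
	/* Per-skb teardown (destructor, frags, ...) still runs here;
	 * only returning the skbuff_head_cache object is deferred.
	 */
	skb_release_all(skb, reason);

	sa->skb_array[sa->skb_count++] = skb;

	if (sa->skb_count == KFREE_SKB_BULK_SIZE) {
		/* Full batch: one call into SLUB frees 16 objects. */
		kmem_cache_free_bulk(skbuff_head_cache,
				     KFREE_SKB_BULK_SIZE, sa->skb_array);
		sa->skb_count = 0;
	}
}

void kfree_skb_list_reason(struct sk_buff *segs,
			   enum skb_drop_reason reason)
{
	struct skb_free_array sa;

	sa.skb_count = 0;

	while (segs) {
		struct sk_buff *next = segs->next;

		/* Only queue skbs that are really being freed now
		 * (e.g. their refcount has dropped to zero).
		 */
		if (__kfree_skb_reason(segs, reason))
			kfree_skb_add_bulk(segs, &sa, reason);

		segs = next;
	}

	/* Flush any remainder smaller than a full batch. */
	if (sa.skb_count)
		kmem_cache_free_bulk(skbuff_head_cache, sa.skb_count,
				     sa.skb_array);
}

The 16-entry limit keeps the on-stack array at 16 pointers (128 bytes on
64-bit) while still amortizing the per-call overhead of the SLUB free path
across a batch.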
According to syzbot, this patch causes kernel panics in the IP fragmentation logic.
Can you double-check that there is no obvious bug?