Message-ID: <9b290dd9-3729-2371-b3ad-ac6570279027@redhat.com>
Date:   Wed, 18 Jan 2023 17:42:22 +0100
From:   Jesper Dangaard Brouer <jbrouer@...hat.com>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     brouer@...hat.com, netdev@...r.kernel.org,
        Jakub Kicinski <kuba@...nel.org>,
        "David S. Miller" <davem@...emloft.net>, pabeni@...hat.com
Subject: Re: [PATCH net-next V2 2/2] net: kfree_skb_list use
 kmem_cache_free_bulk


On 18/01/2023 17.05, Eric Dumazet wrote:
> On Fri, Jan 13, 2023 at 2:52 PM Jesper Dangaard Brouer
> <brouer@...hat.com> wrote:
>>
>> The kfree_skb_list function walks the SKB list (via skb->next) and
>> frees each SKB individually to the SLUB/SLAB allocator (kmem_cache).
>> It is more efficient to bulk free them via the kmem_cache_free_bulk
>> API.
>>
>> This patch creates a stack-local array of SKBs to bulk free while
>> walking the list. The bulk array size is limited to 16 SKBs to trade
>> off stack usage against efficiency. The SLUB kmem_cache
>> "skbuff_head_cache" uses an object size of 256 bytes, usually in an
>> order-1 page of 8192 bytes, which gives 32 objects per slab (this can
>> vary across archs and due to SLUB sharing). Thus, for SLUB the
>> optimal bulk free case is 32 objects belonging to the same slab, but
>> at runtime this is unlikely to occur.
>>
>> The expected gain from using the kmem_cache bulk alloc and free APIs
>> has been assessed via a microbenchmark kernel module[1].
>>
>> The module 'slab_bulk_test01' results at bulk 16 element:
>>   kmem-in-loop Per elem: 109 cycles(tsc) 30.532 ns (step:16)
>>   kmem-bulk    Per elem: 64 cycles(tsc) 17.905 ns (step:16)
>>
>> A more detailed description of the benchmarks is available in [2].
>>
>> [1] https://github.com/netoptimizer/prototype-kernel/tree/master/kernel/mm
>> [2] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/kfree_skb_list01.org
>>
>> V2: rename function to kfree_skb_add_bulk.
>>
>> Reviewed-by: Saeed Mahameed <saeed@...nel.org>
>> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
>> ---
> 
> According to syzbot, this patch causes kernel panics in the IP fragmentation logic.
> 
> Can you double check that there is no obvious bug?

Do you have a link to the syzbot issue?
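
For reference while digging into this: the core of the change is a
stack-local bulk array that gets flushed to kmem_cache_free_bulk()
every 16 SKBs. Below is a simplified sketch of that idea, not the
literal patch; the function names kfree_skb_list_bulk() /
kfree_skb_add_bulk(), the struct layout, and the skb_release_all()
call signature are illustrative and may differ from what is in
net-next (conceptually this lives in net/core/skbuff.c).

/* Sketch only - assumes the helpers available inside net/core/skbuff.c. */
#define KFREE_SKB_BULK_SIZE	16

struct skb_free_array {
	unsigned int skb_count;
	void *skb_array[KFREE_SKB_BULK_SIZE];
};

static void kfree_skb_add_bulk(struct sk_buff *skb,
			       struct skb_free_array *sa,
			       enum skb_drop_reason reason)
{
	/* Clones/fclones have their own lifetime rules; free them the
	 * regular way instead of batching them. */
	if (unlikely(skb->fclone != SKB_FCLONE_UNAVAILABLE)) {
		__kfree_skb(skb);
		return;
	}

	/* Per-SKB teardown (destructor, frags, etc.) still happens one
	 * SKB at a time; only the final kmem_cache free is batched.
	 * Note: skb_release_all() signature varies by kernel version. */
	skb_release_all(skb, reason);
	sa->skb_array[sa->skb_count++] = skb;

	if (unlikely(sa->skb_count == KFREE_SKB_BULK_SIZE)) {
		kmem_cache_free_bulk(skbuff_head_cache,
				     KFREE_SKB_BULK_SIZE, sa->skb_array);
		sa->skb_count = 0;
	}
}

static void kfree_skb_list_bulk(struct sk_buff *segs,
				enum skb_drop_reason reason)
{
	struct skb_free_array sa = { .skb_count = 0 };

	while (segs) {
		struct sk_buff *next = segs->next;

		skb_mark_not_on_list(segs);

		/* Only SKBs whose refcount drops to zero are freed. */
		if (skb_unref(segs))
			kfree_skb_add_bulk(segs, &sa, reason);

		segs = next;
	}

	/* Flush the remainder that did not fill a whole bulk of 16. */
	if (sa.skb_count)
		kmem_cache_free_bulk(skbuff_head_cache, sa.skb_count,
				     sa.skb_array);
}

Capping the array at 16 keeps the extra stack footprint to 16 pointers
while still covering half of a typical 32-object skbuff_head_cache
slab, as described in the quoted commit message above.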

--Jesper
