Date:   Sat, 13 Feb 2021 12:30:18 -0800
From:   Alexander Duyck <>
To:     Alexander Lobakin <>
Cc:     "David S. Miller" <>,
        Jakub Kicinski <>,
        Jonathan Lemon <>,
        Eric Dumazet <>,
        Dmitry Vyukov <>,
        Willem de Bruijn <>,
        Randy Dunlap <>,
        Kevin Hao <>,
        Pablo Neira Ayuso <>,
        Jakub Sitnicki <>,
        Marco Elver <>,
        Dexuan Cui <>,
        Paolo Abeni <>,
        Jesper Dangaard Brouer <>,
        Alexander Duyck <>,
        Alexei Starovoitov <>,
        Daniel Borkmann <>,
        Andrii Nakryiko <>,
        Taehee Yoo <>, Wei Wang <>,
        Cong Wang <>,
        Björn Töpel <>,
        Miaohe Lin <>,
        Guillaume Nault <>,
        Florian Westphal <>,
        Edward Cree <>,
        LKML <>,
        Netdev <>
Subject: Re: [PATCH v6 net-next 00/11] skbuff: introduce skbuff_heads bulking
 and reusing

On Sat, Feb 13, 2021 at 6:10 AM Alexander Lobakin <> wrote:
> Currently, all kinds of skb allocation allocate skbuff_heads one by
> one via kmem_cache_alloc().
> On the other hand, we have the percpu napi_alloc_cache to store
> skbuff_heads queued up for freeing and flush them in bulk.
> We can use this cache not only for bulk-wiping, but also to obtain
> heads for new skbs and avoid unconditional allocations, as well as
> for bulk-allocating (as the XDP cpumap code and the veth driver
> already do).
> As this might affect latencies, cache pressure and lots of hardware-
> and driver-dependent behavior, this new feature is mostly optional
> and can be used via:
>  - a new napi_build_skb() function (as a replacement for build_skb());
>  - existing {,__}napi_alloc_skb() and napi_get_frags() functions;
>  - __alloc_skb() with passing SKB_ALLOC_NAPI in flags.
> iperf3 showed 35-70 Mbps gains for both TCP and UDP while performing
> VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be bigger
> on more powerful hosts and on NICs handling tens of Mpps.
> Note on skbuff_heads from remote slabs or pfmemalloc'ed slabs:
>  - kmalloc()/kmem_cache_alloc() itself allows allocating memory from
>    remote nodes by default to defragment their slabs. This is
>    controlled by a sysctl, so by that standard a skbuff_head from a
>    remote node is an OK case;
>  - The easiest way to check whether the slab of a skbuff_head is
>    remote or pfmemalloc'ed is:
>         if (!dev_page_is_reusable(virt_to_head_page(skb)))
>                 /* drop it */;
>    ...*but*, since most slabs are built of compound pages,
>    virt_to_head_page() will hit its unlikely branch on every single
>    call. This check cost at least 20 Mbps in test scenarios, so it
>    seems better _not_ to do it.


> Alexander Lobakin (11):
>   skbuff: move __alloc_skb() next to the other skb allocation functions
>   skbuff: simplify kmalloc_reserve()
>   skbuff: make __build_skb_around() return void
>   skbuff: simplify __alloc_skb() a bit
>   skbuff: use __build_skb_around() in __alloc_skb()
>   skbuff: remove __kfree_skb_flush()
>   skbuff: move NAPI cache declarations upper in the file
>   skbuff: introduce {,__}napi_build_skb() which reuses NAPI cache heads
>   skbuff: allow to optionally use NAPI cache from __alloc_skb()
>   skbuff: allow to use NAPI cache from __napi_alloc_skb()
>   skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing
>  include/linux/skbuff.h |   4 +-
>  net/core/dev.c         |  16 +-
>  net/core/skbuff.c      | 428 +++++++++++++++++++++++------------------
>  3 files changed, 242 insertions(+), 206 deletions(-)

With the last few changes, and with the testing that verified the need
to drop the cache clearing, this patch set looks good to me.

Reviewed-by: Alexander Duyck <>
