Message-ID: <CAKgT0UfEFBOmQJvry0-+hGnoy7jP3U1ZKbP2nk7NYszVU+O==A@mail.gmail.com>
Date: Sat, 13 Feb 2021 12:30:18 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: Alexander Lobakin <alobakin@...me>
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Eric Dumazet <edumazet@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Willem de Bruijn <willemb@...gle.com>,
Randy Dunlap <rdunlap@...radead.org>,
Kevin Hao <haokexin@...il.com>,
Pablo Neira Ayuso <pablo@...filter.org>,
Jakub Sitnicki <jakub@...udflare.com>,
Marco Elver <elver@...gle.com>,
Dexuan Cui <decui@...rosoft.com>,
Paolo Abeni <pabeni@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Alexander Duyck <alexanderduyck@...com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andriin@...com>,
Taehee Yoo <ap420073@...il.com>, Wei Wang <weiwan@...gle.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Björn Töpel <bjorn@...nel.org>,
Miaohe Lin <linmiaohe@...wei.com>,
Guillaume Nault <gnault@...hat.com>,
Florian Westphal <fw@...len.de>,
Edward Cree <ecree.xilinx@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH v6 net-next 00/11] skbuff: introduce skbuff_heads bulking
and reusing
On Sat, Feb 13, 2021 at 6:10 AM Alexander Lobakin <alobakin@...me> wrote:
>
> Currently, every skb allocation path allocates skbuff_heads one
> by one via kmem_cache_alloc().
> On the other hand, we have a percpu napi_alloc_cache to store
> skbuff_heads queued up for freeing and flush them in bulk.
>
> We can use this cache not only for bulk-freeing, but also to obtain
> heads for new skbs and avoid unconditional allocations, as well as
> for bulk-allocating (as XDP's cpumap code and the veth driver
> already do).
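The dual use of the cache described above, refilled by frees and drained by allocations, with the regular allocator as a fallback, can be sketched in user space. This is an illustrative model only: the names, the cache size, and the plain malloc()/free() fallback are placeholders for the kernel's per-cpu napi_alloc_cache and kmem_cache machinery, not the actual implementation.

```c
#include <stdlib.h>
#include <stddef.h>

#define HEAD_CACHE_SIZE 64   /* illustrative; kernel uses NAPI_SKB_CACHE_SIZE */

/* hypothetical stand-in for struct sk_buff */
struct head { int dummy; };

static struct head *head_cache[HEAD_CACHE_SIZE];
static unsigned int head_count;

/* On free, queue the head for reuse instead of returning it to the
 * allocator right away; only overflow goes back to the allocator
 * (the kernel flushes in bulk via kmem_cache_free_bulk() instead). */
static void head_free(struct head *h)
{
	if (head_count < HEAD_CACHE_SIZE) {
		head_cache[head_count++] = h;
		return;
	}
	free(h);
}

/* On allocation, reuse a cached head when one is available and fall
 * back to the allocator otherwise. */
static struct head *head_alloc(void)
{
	if (head_count)
		return head_cache[--head_count];
	return malloc(sizeof(struct head));
}
```

A freed head is handed straight to the next allocation, so the common NAPI pattern of freeing and allocating heads in the same softirq never touches the slab allocator at all.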
>
> As this might affect latencies, cache pressure and lots of hardware-
> and driver-dependent behaviour, this new feature is mostly optional
> and can be enabled via:
> - a new napi_build_skb() function (as a replacement for build_skb());
> - existing {,__}napi_alloc_skb() and napi_get_frags() functions;
> - __alloc_skb(), by passing SKB_ALLOC_NAPI in flags.
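The flag-gated opt-in in the last bullet can also be modelled in user space. SKB_ALLOC_NAPI is the real flag name from the series; everything else here (the flag's value, the cache, alloc_head()) is a hypothetical stand-in to show the gating, not kernel code.

```c
#include <stdlib.h>
#include <stddef.h>

#define SKB_ALLOC_NAPI 0x1   /* flag name from the series; value illustrative */
#define CACHE_SIZE 64

struct head { int dummy; };

static struct head *napi_cache[CACHE_SIZE];
static unsigned int napi_count;

/* Only callers that know they run in NAPI (softirq) context pass
 * SKB_ALLOC_NAPI and may be served from the cache; every other
 * caller unconditionally takes the regular allocator path. */
static struct head *alloc_head(unsigned int flags)
{
	if ((flags & SKB_ALLOC_NAPI) && napi_count)
		return napi_cache[--napi_count];
	return malloc(sizeof(struct head));
}

static void cache_head(struct head *h)
{
	if (napi_count < CACHE_SIZE)
		napi_cache[napi_count++] = h;
	else
		free(h);
}
```

Keeping the cache opt-in per call site is what makes the feature safe to merge: paths where reuse could hurt simply don't pass the flag.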
>
> iperf3 showed 35-70 Mbps gains for both TCP and UDP while performing
> VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be bigger
> on more powerful hosts and on NICs pushing tens of Mpps.
>
> Note on skbuff_heads from distant slabs or pfmemalloc'ed slabs:
> - kmalloc()/kmem_cache_alloc() by default allows allocating memory
> from remote nodes to defragment their slabs. This is controlled by
> a sysctl, but given that behaviour, a skbuff_head from a remote
> node is an acceptable case;
> - The easiest way to check if the slab of skbuff_head is remote or
> pfmemalloc'ed is:
>
> 	if (!dev_page_is_reusable(virt_to_head_page(skb)))
> 		/* drop it */;
>
> ...*but*, given that most slabs are built from compound pages,
> virt_to_head_page() will hit its unlikely branch on every single
> call. This check cost at least 20 Mbps in test scenarios, so it
> seems better _not_ to do it.
<snip>
> Alexander Lobakin (11):
> skbuff: move __alloc_skb() next to the other skb allocation functions
> skbuff: simplify kmalloc_reserve()
> skbuff: make __build_skb_around() return void
> skbuff: simplify __alloc_skb() a bit
> skbuff: use __build_skb_around() in __alloc_skb()
> skbuff: remove __kfree_skb_flush()
> skbuff: move NAPI cache declarations upper in the file
> skbuff: introduce {,__}napi_build_skb() which reuses NAPI cache heads
> skbuff: allow to optionally use NAPI cache from __alloc_skb()
> skbuff: allow to use NAPI cache from __napi_alloc_skb()
> skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing
>
> include/linux/skbuff.h | 4 +-
> net/core/dev.c | 16 +-
> net/core/skbuff.c | 428 +++++++++++++++++++++++------------------
> 3 files changed, 242 insertions(+), 206 deletions(-)
>
With the last few changes, and the testing that verified the need to drop
the cache clearing, this patch set looks good to me.
Reviewed-by: Alexander Duyck <alexanderduyck@...com>