Message-ID: <9cd1e24a4fb22136caaeecb2eb81d7652e6dd220.camel@redhat.com>
Date: Wed, 21 Sep 2022 20:10:31 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: netdev <netdev@...r.kernel.org>,
Alexander Duyck <alexander.duyck@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH net-next] net: skb: introduce and use a single page frag
cache
On Wed, 2022-09-21 at 10:18 -0700, Eric Dumazet wrote:
> On Wed, Sep 21, 2022 at 9:42 AM Paolo Abeni <pabeni@...hat.com> wrote:
> >
> > After commit 3226b158e67c ("net: avoid 32 x truesize under-estimation
> > for tiny skbs") we are observing 10-20% regressions in performance
> > tests with small packets. The perf trace points to high pressure on
> > the slab allocator.
> >
> > This change tries to improve the allocation scheme for small packets
> > using an idea originally suggested by Eric: a new per CPU page frag is
> > introduced and used in __napi_alloc_skb to cope with small allocation
> > requests.
> >
> > To ensure that the above does not lead to excessive truesize
> > underestimation, the frag size for small allocations is inflated to 1K
> > and all the above is restricted to builds with 4K page size.
> >
> > Note that we need to update the run-time check introduced with commit
> > fd9ea57f4e95 ("net: add napi_get_frags_check() helper") accordingly.
> >
> > Alex suggested a smart page refcount scheme to reduce the number
> > of atomic operations and deal properly with pfmemalloc pages.
> >
> > Under small packet UDP flood, I measure a 15% peak tput increase.
> >
> > Suggested-by: Eric Dumazet <eric.dumazet@...il.com>
> > Suggested-by: Alexander H Duyck <alexander.duyck@...il.com>
> > Signed-off-by: Paolo Abeni <pabeni@...hat.com>
> > ---
> > @Eric, @Alex please let me know if you are comfortable with the
> > attribution
> > ---
> > include/linux/netdevice.h | 1 +
> > net/core/dev.c | 17 ------
> > net/core/skbuff.c | 115 +++++++++++++++++++++++++++++++++++++-
> > 3 files changed, 113 insertions(+), 20 deletions(-)
> >
> > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > index 9f42fc871c3b..a1938560192a 100644
> > --- a/include/linux/netdevice.h
> > +++ b/include/linux/netdevice.h
> > @@ -3822,6 +3822,7 @@ void netif_receive_skb_list(struct list_head *head);
> > gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb);
> > void napi_gro_flush(struct napi_struct *napi, bool flush_old);
> > struct sk_buff *napi_get_frags(struct napi_struct *napi);
> > +void napi_get_frags_check(struct napi_struct *napi);
> > gro_result_t napi_gro_frags(struct napi_struct *napi);
> > struct packet_offload *gro_find_receive_by_type(__be16 type);
> > struct packet_offload *gro_find_complete_by_type(__be16 type);
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index d66c73c1c734..fa53830d0683 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -6358,23 +6358,6 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
> > }
> > EXPORT_SYMBOL(dev_set_threaded);
> >
> > -/* Double check that napi_get_frags() allocates skbs with
> > - * skb->head being backed by slab, not a page fragment.
> > - * This is to make sure bug fixed in 3226b158e67c
> > - * ("net: avoid 32 x truesize under-estimation for tiny skbs")
> > - * does not accidentally come back.
> > - */
> > -static void napi_get_frags_check(struct napi_struct *napi)
> > -{
> > - struct sk_buff *skb;
> > -
> > - local_bh_disable();
> > - skb = napi_get_frags(napi);
> > - WARN_ON_ONCE(skb && skb->head_frag);
> > - napi_free_frags(napi);
> > - local_bh_enable();
> > -}
> > -
> > void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
> > int (*poll)(struct napi_struct *, int), int weight)
> > {
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index f1b8b20fc20b..2be11b487df1 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -134,8 +134,73 @@ static void skb_under_panic(struct sk_buff *skb, unsigned int sz, void *addr)
> > #define NAPI_SKB_CACHE_BULK 16
> > #define NAPI_SKB_CACHE_HALF (NAPI_SKB_CACHE_SIZE / 2)
> >
> > +/* the compiler doesn't like 'SKB_TRUESIZE(GRO_MAX_HEAD) > 512', but we
> > + * can infer that condition by checking the double word and MAX_HEADER sizes
> > + */
> > +#if PAGE_SIZE == SZ_4K && (defined(CONFIG_64BIT) || MAX_HEADER > 64)
> > +
> > +#define NAPI_HAS_SMALL_PAGE_FRAG 1
> > +
> > +/* Specialized page frag allocator using a single order 0 page
> > + * and slicing it into 1K sized fragments. Constrained to systems
> > + * with:
> > + * - a very limited amount of 1K fragments fitting a single
> > + *   page - to avoid excessive truesize underestimation
> > + * - reasonably high truesize value for napi_get_frags()
> > + *   allocation - to avoid memory usage increased compared
> > + *   to kmalloc, see __napi_alloc_skb()
> > + */
> > +struct page_frag_1k {
> > + void *va;
> > + u16 offset;
> > + bool pfmemalloc;
> > +};
> > +
> > +static void *page_frag_alloc_1k(struct page_frag_1k *nc, gfp_t gfp)
> > +{
> > + struct page *page;
> > + int offset;
> > +
> > + if (likely(nc->va)) {
> > + offset = nc->offset - SZ_1K;
> > + if (likely(offset >= 0))
> > + goto out;
> > +
> > + put_page(virt_to_page(nc->va));
>
> This probably can be removed, if the page_ref_add() later is adjusted by one ?
I think you are right. It looks like we never touch the page after the
last fragment is used. One less atomic operation :) And one less cold
cacheline accessed.
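Something along these lines, I think - just an untested sketch of the
adjusted helper, reusing the names from the patch above (the actual
respin may look different):

static void *page_frag_alloc_1k(struct page_frag_1k *nc, gfp_t gfp)
{
	struct page *page;
	int offset;

	/* nc is zero-initialized, so the very first call falls through
	 * to the allocation path below
	 */
	offset = nc->offset - SZ_1K;
	if (likely(offset >= 0))
		goto use_frag;

	page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
	if (!page)
		return NULL;

	nc->va = page_address(page);
	nc->pfmemalloc = page_is_pfmemalloc(page);

	/* take one extra reference per additional fragment: the consumer
	 * of the last fragment drops the final reference, so there is no
	 * put_page() here when moving to a fresh page
	 */
	offset = PAGE_SIZE - SZ_1K;
	page_ref_add(page, offset / SZ_1K);

use_frag:
	nc->offset = offset;
	return nc->va + offset;
}
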
I read the above as meaning you are somewhat OK with the overall size
and number of conditionals in this change; am I guessing too much?
Thanks!
Paolo