[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20210323170447.78d65d05@carbon>
Date: Tue, 23 Mar 2021 17:04:47 +0100
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc: Alexander Lobakin <alobakin@...me>,
Matteo Croce <mcroce@...ux.microsoft.com>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Jonathan Lemon <jonathan.lemon@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Saeed Mahameed <saeedm@...dia.com>,
David Ahern <dsahern@...il.com>,
Saeed Mahameed <saeed@...nel.org>, Andrew Lunn <andrew@...n.ch>
Subject: Re: [PATCH net-next 0/6] page_pool: recycle buffers
On Tue, 23 Mar 2021 17:47:46 +0200
Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
> On Tue, Mar 23, 2021 at 03:41:23PM +0000, Alexander Lobakin wrote:
> > From: Matteo Croce <mcroce@...ux.microsoft.com>
> > Date: Mon, 22 Mar 2021 18:02:55 +0100
> >
> > > From: Matteo Croce <mcroce@...rosoft.com>
> > >
> > > This series enables recycling of the buffers allocated with the page_pool API.
> > > The first two patches are just prerequisite to save space in a struct and
> > > avoid recycling pages allocated with other API.
> > > Patch 2 was based on a previous idea from Jonathan Lemon.
> > >
> > > The third one is the real recycling, 4 fixes the compilation of __skb_frag_unref
> > > users, and 5,6 enable the recycling on two drivers.
> > >
> > > In the last two patches I reported the improvement I have with the series.
> > >
> > > The recycling as is can't be used with drivers like mlx5 which do page split,
> > > but this is documented in a comment.
> > > In the future, a refcount can be used so to support mlx5 with no changes.
> > >
> > > Ilias Apalodimas (2):
> > > page_pool: DMA handling and allow to recycles frames via SKB
> > > net: change users of __skb_frag_unref() and add an extra argument
> > >
> > > Jesper Dangaard Brouer (1):
> > > xdp: reduce size of struct xdp_mem_info
> > >
> > > Matteo Croce (3):
> > > mm: add a signature in struct page
> > > mvpp2: recycle buffers
> > > mvneta: recycle buffers
> > >
> > > .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 2 +-
> > > drivers/net/ethernet/marvell/mvneta.c | 4 +-
> > > .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 17 +++----
> > > drivers/net/ethernet/marvell/sky2.c | 2 +-
> > > drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
> > > include/linux/mm_types.h | 1 +
> > > include/linux/skbuff.h | 33 +++++++++++--
> > > include/net/page_pool.h | 15 ++++++
> > > include/net/xdp.h | 5 +-
> > > net/core/page_pool.c | 47 +++++++++++++++++++
> > > net/core/skbuff.c | 20 +++++++-
> > > net/core/xdp.c | 14 ++++--
> > > net/tls/tls_device.c | 2 +-
> > > 13 files changed, 138 insertions(+), 26 deletions(-)
> >
> > Just for the reference, I've performed some tests on 1G SoC NIC with
> > this patchset on, here's direct link: [0]
> >
>
> Thanks for the testing!
> Any chance you can get a perf measurement on this?
I guess you mean perf-report (--stdio) output, right?
> Is DMA syncing taking a substantial amount of your cpu usage?
(+1 this is an important question)
> >
> > [0] https://lore.kernel.org/netdev/20210323153550.130385-1-alobakin@pm.me
> >
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
Powered by blists - more mailing lists