Message-ID: <20221107213521.i6qmjut5hdxrrmcs@soft-dev3-1>
Date: Mon, 7 Nov 2022 22:35:21 +0100
From: Horatiu Vultur <horatiu.vultur@...rochip.com>
To: Alexander Lobakin <alexandr.lobakin@...el.com>
CC: <linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
<bpf@...r.kernel.org>, <davem@...emloft.net>,
<edumazet@...gle.com>, <kuba@...nel.org>, <pabeni@...hat.com>,
<ast@...nel.org>, <daniel@...earbox.net>, <hawk@...nel.org>,
<john.fastabend@...il.com>, <linux@...linux.org.uk>
Subject: Re: [PATCH net-next v2 4/4] net: lan966x: Use page_pool API
Hi Olek,

The 11/07/2022 17:40, Alexander Lobakin wrote:
>
> From: Horatiu Vultur <horatiu.vultur@...rochip.com>
> Date: Sun, 6 Nov 2022 22:11:54 +0100
>
> > Use the page_pool API for allocation, freeing and DMA handling instead
> > of dev_alloc_pages, __free_pages and dma_map_page.
> >
> > Signed-off-by: Horatiu Vultur <horatiu.vultur@...rochip.com>
> > ---
> > .../net/ethernet/microchip/lan966x/Kconfig | 1 +
> > .../ethernet/microchip/lan966x/lan966x_fdma.c | 72 ++++++++++---------
> > .../ethernet/microchip/lan966x/lan966x_main.h | 3 +
> > 3 files changed, 43 insertions(+), 33 deletions(-)
>
> [...]
>
> > @@ -84,6 +62,27 @@ static void lan966x_fdma_rx_add_dcb(struct lan966x_rx *rx,
> > rx->last_entry = dcb;
> > }
> >
> > +static int lan966x_fdma_rx_alloc_page_pool(struct lan966x_rx *rx)
> > +{
> > + struct lan966x *lan966x = rx->lan966x;
> > + struct page_pool_params pp_params = {
> > + .order = rx->page_order,
> > + .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> > + .pool_size = FDMA_DCB_MAX,
> > + .nid = NUMA_NO_NODE,
> > + .dev = lan966x->dev,
> > + .dma_dir = DMA_FROM_DEVICE,
> > + .offset = 0,
> > + .max_len = PAGE_SIZE << rx->page_order,
>
> ::max_len's primary purpose is to save time on DMA syncs.
> First of all, you can subtract
> `SKB_DATA_ALIGN(sizeof(struct skb_shared_info))`, since your HW never
> writes to those last couple hundred bytes.
> But I suggest calculating ::max_len based on your current MTU
> value. Let's say you have 16k pages and an MTU of 1500; that is a
> huge difference (unless your DMA is always coherent, but I assume
> that's not the case).
>
> In lan966x_fdma_change_mtu() you do:
>
> max_mtu = lan966x_fdma_get_max_mtu(lan966x);
> max_mtu += IFH_LEN_BYTES;
> max_mtu += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> max_mtu += VLAN_HLEN * 2;
>
> `lan966x_fdma_get_max_mtu(lan966x) + IFH_LEN_BYTES + VLAN_HLEN * 2`
> (i.e. 1536 for an MTU of 1500) is your actual max_len value, given
> that you don't reserve any headroom (which is unfortunate, but I
> guess you're working on this already, since XDP requires
> %XDP_PACKET_HEADROOM).
Thanks for the suggestion. I will try it.
Regarding XDP_PACKET_HEADROOM: for XDP_DROP I didn't see it being
needed. Once support for XDP_TX or XDP_REDIRECT is added, then yes, I
will also need to reserve the headroom.
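Something like this, I suppose (untested sketch, reusing only the
names from the snippets quoted above):

	struct page_pool_params pp_params = {
		.order = rx->page_order,
		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		...
		/* No headroom yet; this would become
		 * XDP_PACKET_HEADROOM once XDP_TX/XDP_REDIRECT
		 * support is added.
		 */
		.offset = 0,
		/* Sync only what the HW can actually write: the
		 * frame plus the injection header and VLAN tags,
		 * instead of the full PAGE_SIZE << rx->page_order.
		 */
		.max_len = lan966x_fdma_get_max_mtu(lan966x) +
			   IFH_LEN_BYTES + VLAN_HLEN * 2,
	};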
>
> > + };
> > +
> > + rx->page_pool = page_pool_create(&pp_params);
> > + if (IS_ERR(rx->page_pool))
> > + return PTR_ERR(rx->page_pool);
> > +
> > + return 0;
>
> return PTR_ERR_OR_ZERO(rx->page_pool);
Yes, I will use this.
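So the end of lan966x_fdma_rx_alloc_page_pool() simply becomes:

	rx->page_pool = page_pool_create(&pp_params);

	return PTR_ERR_OR_ZERO(rx->page_pool);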
>
> > +}
> > +
> > static int lan966x_fdma_rx_alloc(struct lan966x_rx *rx)
> > {
> > struct lan966x *lan966x = rx->lan966x;
>
> [...]
>
> > --
> > 2.38.0
>
> Thanks,
> Olek
--
/Horatiu