Message-ID: <CAMArcTVRQnxrLnXPQp4D+F1TMS7SeQLtWTsxaLt8vJ7Gdqfj=Q@mail.gmail.com>
Date: Tue, 1 Apr 2025 15:48:11 +0900
From: Taehee Yoo <ap420073@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, pabeni@...hat.com, edumazet@...gle.com,
andrew+netdev@...n.ch, horms@...nel.org, michael.chan@...adcom.com,
pavan.chebbi@...adcom.com, ilias.apalodimas@...aro.org, dw@...idwei.uk,
netdev@...r.kernel.org, kuniyu@...zon.com, sdf@...ichev.me,
aleksander.lobakin@...el.com
Subject: Re: [RFC net-next 1/2] eth: bnxt: refactor buffer descriptor
On Tue, Apr 1, 2025 at 2:34 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
Hi Jakub,
Thanks a lot for the review!
> On Mon, 31 Mar 2025 11:47:28 +0000 Taehee Yoo wrote:
> > There are two kinds of buffer descriptors in bnxt: struct
> > bnxt_sw_rx_bd and struct bnxt_sw_rx_agg_bd (plus struct
> > bnxt_tpa_info). The bnxt_sw_rx_bd is the bd for the rx ring buffer,
> > and the bnxt_sw_rx_agg_bd is the bd for the aggregation ring buffer.
> > The purpose of these bds is the same, but their structures are
> > slightly different.
> >
> > struct bnxt_sw_rx_bd {
> > void *data;
> > u8 *data_ptr;
> > dma_addr_t mapping;
> > };
> >
> > struct bnxt_sw_rx_agg_bd {
> > struct page *page;
> > unsigned int offset;
> > dma_addr_t mapping;
> > };
> >
> > bnxt_sw_rx_bd->data is either a page pointer or page_address(page) +
> > offset. In page mode (when XDP is set), data holds a page pointer;
> > otherwise, it holds a virtual address.
> > Before Jakub's recent head_pool work, bnxt_sw_rx_bd->data was
> > allocated by kmalloc(), but after that work it is allocated by the
> > page_pool. So there is no longer any reason to keep handling a
> > virtual address.
> > The goal of this patch is to make bnxt_sw_rx_bd the same as
> > bnxt_sw_rx_agg_bd.
> > With this change, we can easily use page_pool APIs such as
> > page_pool_dma_sync_for_{cpu | device}(), and the conversion from
> > page to netmem also becomes very smooth.
>
> LGTM, could you split this into two patches, tho?
> One for the BD change and one for the syncing changes?
Okay, I will split this patch into two.
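For reference, after the BD change bnxt_sw_rx_bd would simply mirror
bnxt_sw_rx_agg_bd, and the old data/data_ptr users would derive the
address from the page, roughly like this (a sketch of my plan, not the
final diff; rx_buf here stands for the ring entry being processed):

struct bnxt_sw_rx_bd {
	struct page *page;
	unsigned int offset;
	dma_addr_t mapping;
};

	/* e.g. where the old code read rx_buf->data_ptr */
	u8 *data_ptr = page_address(rx_buf->page) + rx_buf->offset;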
>
> > - dma_sync_single_for_device(&pdev->dev, mapping, bp->rx_copybreak,
> > - bp->rx_dir);
> > -
> > + page_pool_dma_sync_for_device(rxr->head_pool, page_to_netmem(page),
> > + bp->rx_dma_offset, bp->rx_copybreak);
>
> I think we should add a separate helper for this instead of extending
> the existing page_pool_dma_sync_for_device(). Let's call it
> page_pool_dma_sync_for_device_frag() ?
Thanks. I will add a separate helper,
page_pool_dma_sync_for_device_frag().
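Something like this is what I have in mind (just a sketch to discuss
the shape of the API; the exact signature, and whether it should also
check the pool's dma-sync setting, are open questions):

static inline void
page_pool_dma_sync_for_device_frag(const struct page_pool *pool,
				   netmem_ref netmem, u32 offset,
				   u32 dma_sync_size)
{
	dma_addr_t dma = page_pool_get_dma_addr_netmem(netmem);

	dma_sync_single_range_for_device(pool->p.dev, dma, offset,
					 dma_sync_size, pool->p.dma_dir);
}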
>
> The use case here is that the driver recycles a frag directly, rather
> than following the normal PP recycling path.
I'm not sure I understand this use case correctly.
I think it's that when a packet is dropped or fully
copied via rx-copy-break, the driver can reuse the frag directly.
To reuse a frag directly, the driver can call
page_pool_dma_sync_for_device_frag() before putting the buffer back
on the rx ring.
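In bnxt terms, I imagine the rx-copy-break path would look roughly
like this (paraphrasing the existing driver flow, not the actual
patch; len and the call to bnxt_reuse_rx_data() stand in for however
the reuse ends up being done after the refactor):

	if (len <= bp->rx_copybreak) {
		/* payload was copied into a small skb, so the frag
		 * can go straight back onto the rx ring
		 */
		page_pool_dma_sync_for_device_frag(rxr->head_pool,
						   page_to_netmem(page),
						   bp->rx_dma_offset,
						   bp->rx_copybreak);
		bnxt_reuse_rx_data(rxr, cons, page);
	}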
If I've misunderstood, please let me know.
Thanks a lot!
Taehee Yoo