Message-ID: <20250331103416.7b76c83c@kernel.org>
Date: Mon, 31 Mar 2025 10:34:16 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Taehee Yoo <ap420073@...il.com>
Cc: davem@...emloft.net, pabeni@...hat.com, edumazet@...gle.com,
 andrew+netdev@...n.ch, horms@...nel.org, michael.chan@...adcom.com,
 pavan.chebbi@...adcom.com, ilias.apalodimas@...aro.org, dw@...idwei.uk,
 netdev@...r.kernel.org, kuniyu@...zon.com, sdf@...ichev.me,
 aleksander.lobakin@...el.com
Subject: Re: [RFC net-next 1/2] eth: bnxt: refactor buffer descriptor

On Mon, 31 Mar 2025 11:47:28 +0000 Taehee Yoo wrote:
> There are two kinds of software buffer descriptors in bnxt: struct
> bnxt_sw_rx_bd and struct bnxt_sw_rx_agg_bd (plus struct bnxt_tpa_info).
> bnxt_sw_rx_bd is the BD for the RX ring buffer, and bnxt_sw_rx_agg_bd
> is the BD for the aggregation ring buffer. The purpose of these BDs is
> the same, but their structures differ slightly.
> 
> struct bnxt_sw_rx_bd {
>         void *data;
>         u8 *data_ptr;
>         dma_addr_t mapping;
> };
> 
> struct bnxt_sw_rx_agg_bd {
>         struct page *page;
>         unsigned int offset;
>         dma_addr_t mapping;
> };
> 
> bnxt_sw_rx_bd->data is either a page pointer or page_address(page) +
> offset. In page mode (when XDP is set), data holds the page pointer;
> otherwise, it holds a virtual address.
> Before Jakub's recent head_pool work, bnxt_sw_rx_bd->data was
> allocated with kmalloc(), but after that work it is allocated from a
> page_pool, so there is no reason to keep handling virtual addresses.
> The goal of this patch is to make bnxt_sw_rx_bd identical to
> bnxt_sw_rx_agg_bd.
> With this change, we can easily use page_pool APIs such as
> page_pool_dma_sync_for_{cpu | device}().
> It also makes the conversion from struct page to netmem very smooth.

LGTM, could you split this into two patches, tho?
One for the BD change and one for the syncing changes?
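
For reference, a rough sketch of where the BD change presumably ends up,
and of how a page-based descriptor lets the RX path use the
page_pool_dma_sync_for_cpu() helper mentioned above (the struct layout
and the bnxt_sync_rx_buf_for_cpu() wrapper are illustrative, not taken
from the patch):

struct bnxt_sw_rx_bd {
	struct page	*page;
	unsigned int	offset;
	dma_addr_t	mapping;
};

/* Illustrative wrapper: sync the received fragment for CPU access via
 * the page_pool helper instead of a raw dma_sync_single_for_cpu() on
 * the stored mapping.
 */
static void bnxt_sync_rx_buf_for_cpu(struct bnxt_rx_ring_info *rxr,
				     struct bnxt_sw_rx_bd *rx_buf, u32 len)
{
	page_pool_dma_sync_for_cpu(rxr->page_pool, rx_buf->page,
				   rx_buf->offset, len);
}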

> -	dma_sync_single_for_device(&pdev->dev, mapping, bp->rx_copybreak,
> -				   bp->rx_dir);
> -
> +	page_pool_dma_sync_for_device(rxr->head_pool, page_to_netmem(page),
> +				      bp->rx_dma_offset, bp->rx_copybreak);

I think we should add a separate helper for this instead of extending
the existing page_pool_dma_sync_for_device(). Let's call it
page_pool_dma_sync_for_device_frag() ?

The use case here is that the driver recycles a frag directly, rather
than following the normal PP recycling path.
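
Something along these lines, roughly (the signature, the netmem
accessor, and the pool->p field accesses are all assumptions at this
point):

static inline void
page_pool_dma_sync_for_device_frag(const struct page_pool *pool,
				   netmem_ref netmem,
				   u32 offset, u32 dma_sync_size)
{
	/* Sync only the fragment the driver recycles in place, at the
	 * caller-supplied offset, rather than pool->p.offset as on the
	 * normal recycling path.
	 */
	dma_sync_single_range_for_device(pool->p.dev,
					 page_pool_get_dma_addr_netmem(netmem),
					 offset, dma_sync_size,
					 pool->p.dma_dir);
}

The bnxt call site in the hunk above would then become something like
page_pool_dma_sync_for_device_frag(rxr->head_pool, page_to_netmem(page),
bp->rx_dma_offset, bp->rx_copybreak).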
