Message-ID: <4bac1eca-007c-4df2-9b35-d9ce5b787410@intel.com>
Date: Mon, 14 Jul 2025 16:35:26 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jacob Keller <jacob.e.keller@...el.com>
CC: Michal Kubiak <michal.kubiak@...el.com>,
	<intel-wired-lan@...ts.osuosl.org>, <maciej.fijalkowski@...el.com>,
	<larysa.zaremba@...el.com>, <netdev@...r.kernel.org>,
	<przemyslaw.kitszel@...el.com>, <anthony.l.nguyen@...el.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next 0/3] ice: convert Rx path to
 Page Pool

From: Jacob Keller <jacob.e.keller@...el.com>
Date: Thu, 10 Jul 2025 15:43:20 -0700

> 
> 
> On 7/7/2025 4:36 PM, Jacob Keller wrote:

[...]

> I got this to work with the following diff:
> 
> diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.h w/drivers/net/ethernet/intel/ice/ice_txrx.h
> index 42e74925b9df..6b72608a20ab 100644
> --- i/drivers/net/ethernet/intel/ice/ice_txrx.h
> +++ w/drivers/net/ethernet/intel/ice/ice_txrx.h
> @@ -342,7 +342,6 @@ struct ice_rx_ring {
>         struct ice_tx_ring *xdp_ring;
>         struct ice_rx_ring *next;       /* pointer to next ring in q_vector */
>         struct xsk_buff_pool *xsk_pool;
> -       u32 nr_frags;
>         u16 rx_buf_len;
>         dma_addr_t dma;                 /* physical address of ring */
>         u8 dcb_tc;                      /* Traffic class of ring */
> diff --git i/drivers/net/ethernet/intel/ice/ice_txrx.c w/drivers/net/ethernet/intel/ice/ice_txrx.c
> index 062291dac99c..403b5c54fd2a 100644
> --- i/drivers/net/ethernet/intel/ice/ice_txrx.c
> +++ w/drivers/net/ethernet/intel/ice/ice_txrx.c
> @@ -831,8 +831,7 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> 
>                 /* retrieve a buffer from the ring */
>                 rx_buf = &rx_ring->rx_fqes[ntc];
> -               if (!libeth_xdp_process_buff(xdp, rx_buf, size))
> -                       break;
> +               libeth_xdp_process_buff(xdp, rx_buf, size);
> 
>                 if (++ntc == cnt)
>                         ntc = 0;
> @@ -852,25 +851,18 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> 
>                 xdp->data = NULL;
>                 rx_ring->first_desc = ntc;
> -               rx_ring->nr_frags = 0;
>                 continue;
>  construct_skb:
>                 skb = xdp_build_skb_from_buff(&xdp->base);
> +               xdp->data = NULL;
> +               rx_ring->first_desc = ntc;
> 
>                 /* exit if we failed to retrieve a buffer */
>                 if (!skb) {
> -                       rx_ring->ring_stats->rx_stats.alloc_page_failed++;
> -                       xdp_verdict = ICE_XDP_CONSUMED;
> -                       xdp->data = NULL;
> -                       rx_ring->first_desc = ntc;
> -                       rx_ring->nr_frags = 0;
> +                       rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
>                         break;
>                 }
> 
> -               xdp->data = NULL;
> -               rx_ring->first_desc = ntc;
> -               rx_ring->nr_frags = 0;
> -
>                 stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
>                 if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
>                                               stat_err_bits))) {

More or less. I'm taking over this series since Michał's on vacation;
I'll double-check everything (against iavf and idpf as well).

Anyway, thanks for the fix.

> 
> 
> --->8---
> 
> The essential change is to not break out of the loop when
> libeth_xdp_process_buff() returns false, since we still need to move
> the ring forward in that case; the usual reason it returns false is a
> zero-length descriptor, which we sometimes get when using larger MTUs.
> 
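To spell that out for the archives, here's a toy userspace demo of the
failure mode (every name here is made up for the demo; only the control
flow mirrors the diff above). Breaking on the empty frag instead of
consuming it would leave ntc stuck on the same descriptor on every poll:

/* toy_ring.c -- build with: cc -Wall -o toy_ring toy_ring.c */
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 8u

struct toy_desc {
	unsigned int size;	/* 0 emulates the zero-length descriptor */
};

/* stand-in for libeth_xdp_process_buff(): false means "empty frag" */
static bool process_buff(const struct toy_desc *desc)
{
	return desc->size != 0;
}

int main(void)
{
	/* one zero-length descriptor in the middle, as larger MTUs
	 * can produce
	 */
	struct toy_desc ring[RING_SIZE] = {
		{ 1500 }, { 1500 }, { 0 }, { 1500 },
		{ 1500 }, { 1500 }, { 1500 }, { 1500 },
	};
	unsigned int ntc = 0;

	for (unsigned int budget = 0; budget < RING_SIZE; budget++) {
		/* the fix: look at the result, but advance ntc either
		 * way -- the hardware already handed us this buffer
		 */
		if (!process_buff(&ring[ntc]))
			printf("desc %u: zero-length frag, skipped\n", ntc);
		else
			printf("desc %u: %u bytes\n", ntc, ring[ntc].size);

		if (++ntc == RING_SIZE)
			ntc = 0;
	}

	return 0;
}
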
> I also dropped some of the state updates and re-ordered how we assign
> xdp->data, and fixed the bug where the ring stats used
> alloc_page_failed instead of alloc_buf_failed as they should have. I
> think this could be further improved or cleaned up, but it might be
> better to wait until the XDP helpers are fully in use.
> 
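For reference, the consolidated construct_skb path then reads roughly
like this (a sketch only, field names as in the diff above); the point
of the reorder is that the buffers are consumed whether or not skb
construction succeeds, so the per-packet resets can be shared instead
of duplicated in both paths:

construct_skb:
	skb = xdp_build_skb_from_buff(&xdp->base);

	/* the buffers backing this frame are consumed either way, so
	 * reset the per-packet state once, before the failure check
	 */
	xdp->data = NULL;
	rx_ring->first_desc = ntc;

	if (!skb) {
		/* buffer-level counter, per the fix above */
		rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
		break;
	}
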
> Regardless, we need something like this to fix the issues with larger MTUs.

Thanks,
Olek
