Message-ID: <aWpFfQ8se5ipy7G+@lizhi-Precision-Tower-5810>
Date: Fri, 16 Jan 2026 09:04:45 -0500
From: Frank Li <Frank.li@....com>
To: Wei Fang <wei.fang@....com>
Cc: shenwei.wang@....com, xiaoning.wang@....com, andrew+netdev@...n.ch,
	davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
	pabeni@...hat.com, ast@...nel.org, daniel@...earbox.net,
	hawk@...nel.org, john.fastabend@...il.com, sdf@...ichev.me,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	imx@...ts.linux.dev, bpf@...r.kernel.org
Subject: Re: [PATCH v2 net-next 07/14] net: fec: transmit XDP frames in bulk

On Fri, Jan 16, 2026 at 03:40:20PM +0800, Wei Fang wrote:
> Currently, the driver writes the ENET_TDAR register for every XDP frame
> to trigger transmit start. These frequent MMIO writes consume extra CPU
> cycles and may reduce XDP TX performance, so batch the frames and
> trigger transmission once per bulk instead.
>
> Signed-off-by: Wei Fang <wei.fang@....com>
> ---
Reviewed-by: Frank Li <Frank.Li@....com>
>  drivers/net/ethernet/freescale/fec_main.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
> index 251191ab99b3..52abeeb50dda 100644
> --- a/drivers/net/ethernet/freescale/fec_main.c
> +++ b/drivers/net/ethernet/freescale/fec_main.c
> @@ -2006,6 +2006,8 @@ static int fec_enet_rx_queue_xdp(struct fec_enet_private *fep, int queue,
>  				rxq->stats[RX_XDP_TX_ERRORS]++;
>  				fec_xdp_drop(rxq, &xdp, sync);
>  				trace_xdp_exception(ndev, prog, XDP_TX);
> +			} else {
> +				xdp_res |= FEC_ENET_XDP_TX;
>  			}
>  			break;
>  		default:
> @@ -2055,6 +2057,10 @@ static int fec_enet_rx_queue_xdp(struct fec_enet_private *fep, int queue,
>  	if (xdp_res & FEC_ENET_XDP_REDIR)
>  		xdp_do_flush();
>
> +	if (xdp_res & FEC_ENET_XDP_TX)
> +		/* Trigger transmission start */
> +		fec_txq_trigger_xmit(fep, fep->tx_queue[tx_qid]);
> +
>  	return pkt_received;
>  }
>
> @@ -4036,9 +4042,6 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
>
>  	txq->bd.cur = bdp;
>
> -	/* Trigger transmission start */
> -	fec_txq_trigger_xmit(fep, txq);
> -
>  	return 0;
>  }
>
> @@ -4088,6 +4091,9 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
>  		sent_frames++;
>  	}
>
> +	if (sent_frames)
> +		fec_txq_trigger_xmit(fep, txq);
> +
>  	__netif_tx_unlock(nq);
>
>  	return sent_frames;
> --
> 2.34.1
>
