Message-ID: <e25ad472-7815-41a2-83b1-93cc364e894b@bytedance.com>
Date: Fri, 31 Oct 2025 16:20:53 -0700
From: Zijian Zhang <zijianzhang@...edance.com>
To: netdev@...r.kernel.org
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
edumazet@...gle.com, andrew+netdev@...n.ch, saeedm@...dia.com,
gal@...dia.com, leonro@...dia.com, witu@...dia.com, parav@...dia.com,
tariqt@...dia.com, hkelam@...vell.com
Subject: Re: [PATCH v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
This patch does not apply; please ignore it, a new one has been sent.
(An illustrative sketch of the sq selection approach follows the quoted
patch below.)
On 10/31/25 3:27 PM, Zijian Zhang wrote:
> From: Zijian Zhang <zijianzhang@...edance.com>
>
> When performing XDP_REDIRECT from one mlx5 device to another, using
> smp_processor_id() to select the SQ can produce an out-of-range index.
>
> Assume eth0 is redirecting a packet to eth1. eth1 is configured
> with only 8 channels, while eth0 has its RX queues pinned to
> higher-numbered CPUs (e.g. CPU 12). When a packet is received on
> such a CPU and redirected to eth1, the driver uses smp_processor_id()
> as the SQ index. Since the CPU ID exceeds the number of queues on
> eth1, the lookup (priv->channels.c[sq_num]) goes out of range and
> the redirect fails.
>
> This patch fixes the issue by mapping the CPU ID to a valid channel
> index using modulo arithmetic.
>
> sq_num = smp_processor_id() % priv->channels.num;
>
> With this change, XDP_REDIRECT works correctly even when the source
> device's queues are pinned to high-numbered CPUs and the target
> device has fewer TX queues.
>
> v2:
> As suggested by Jakub Kicinski, add a lock to synchronize TX when
> XDP redirects packets to the same queue.
>
> Signed-off-by: Zijian Zhang <zijianzhang@...edance.com>
> Reviewed-by: Hariprasad Kelam <hkelam@...vell.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en.h | 3 +++
> drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 8 +++-----
> drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
> 3 files changed, 8 insertions(+), 5 deletions(-)
...
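
For illustration only, here is a minimal sketch of the selection-plus-
locking approach described in the quoted patch. It is not the actual
mlx5e change: everything prefixed with my_ and the xdp_redirect_lock
field are hypothetical placeholders, while smp_processor_id(),
netdev_priv(), spin_lock()/spin_unlock() and the ndo_xdp_xmit()
signature are standard kernel interfaces.

/* Sketch of an ndo_xdp_xmit handler that wraps the CPU id into the
 * channel range and serializes CPUs that map to the same SQ.
 * Types and fields marked "hypothetical" are not taken from mlx5e.
 */
static int my_xdp_xmit(struct net_device *dev, int n,
                       struct xdp_frame **frames, u32 flags)
{
        struct my_priv *priv = netdev_priv(dev);  /* hypothetical priv */
        /* The CPU id may exceed the channel count, so wrap it into range. */
        int sq_num = smp_processor_id() % priv->channels.num;
        struct my_xdpsq *sq = &priv->channels.c[sq_num]->xdpsq; /* hypothetical */
        int nxmit = 0, i;

        /* Several CPUs can now share one SQ, so serialize the TX path. */
        spin_lock(&sq->xdp_redirect_lock);        /* hypothetical lock */
        for (i = 0; i < n; i++) {
                if (my_xdp_xmit_frame(sq, frames[i])) /* hypothetical helper */
                        break;
                nxmit++;
        }
        spin_unlock(&sq->xdp_redirect_lock);

        return nxmit;
}

The modulo keeps the index valid regardless of which CPU received the
frame, and the per-SQ lock covers the case where two CPUs whose ids are
congruent modulo priv->channels.num redirect at the same time.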