Message-ID: <20230417121903.46218-10-tariqt@nvidia.com>
Date: Mon, 17 Apr 2023 15:18:57 +0300
From: Tariq Toukan <tariqt@...dia.com>
To: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>
CC: Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Toke Høiland-Jørgensen <toke@...hat.com>,
<netdev@...r.kernel.org>, Saeed Mahameed <saeedm@...dia.com>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Gal Pressman <gal@...dia.com>,
Henning Fehrmann <henning.fehrmann@....mpg.de>,
"Oliver Behnke" <oliver.behnke@....mpg.de>,
Tariq Toukan <tariqt@...dia.com>
Subject: [PATCH net-next 09/15] net/mlx5e: XDP, Consider large multi-buffer packets in Striding RQ params calculations

Function mlx5e_rx_get_linear_stride_sz() returns PAGE_SIZE immediately
in case an XDP program is attached. The more accurate formula is
ALIGN(sz, PAGE_SIZE), to prevent two packets from residing on the same
page.
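
For illustration only (not part of the patch), a minimal userspace sketch
of this rounding. ALIGN() and roundup_pow_of_two() below are stand-ins for
the kernel helpers of the same names, and the sample sizes are made up:

  /* Illustration only: userspace model of the stride-size rounding. */
  #include <stdio.h>

  #define PAGE_SIZE 4096UL
  #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

  static unsigned long roundup_pow_of_two(unsigned long x)
  {
          unsigned long r = 1;

          while (r < x)
                  r <<= 1;
          return r;
  }

  int main(void)
  {
          /* Hypothetical linear sizes: one sub-page, one multi-buffer. */
          unsigned long sizes[] = { 3000UL, 9000UL };

          for (int i = 0; i < 2; i++) {
                  unsigned long sz = roundup_pow_of_two(sizes[i]);

                  /* sz is a power of two, so ALIGN(sz, PAGE_SIZE) equals
                   * (sz < PAGE_SIZE ? PAGE_SIZE : sz), the form used in
                   * the patch below.
                   */
                  printf("size %lu -> stride %lu\n", sizes[i],
                         ALIGN(sz, PAGE_SIZE));
          }
          return 0;
  }

This prints "size 3000 -> stride 4096" and "size 9000 -> stride 16384":
sub-page packets never share a page, and large multi-buffer sizes stay
page-aligned.
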
The assumption behind the current code is that sz <= PAGE_SIZE holds for
all cases with an XDP program set.

This is true because the function is called:
- 3 times from Striding RQ flows, where XDP is not supported for such
  large packets.
- once from the Legacy RQ flow, under the condition
  mlx5e_rx_is_linear_skb().

No functional change here, just removing the implied assumption in
preparation for supporting XDP multi-buffer in Striding RQ.

Reviewed-by: Saeed Mahameed <saeedm@...dia.com>
Signed-off-by: Tariq Toukan <tariqt@...dia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/params.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 31f3c6e51d9e..196862e67af3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -253,17 +253,20 @@ static u32 mlx5e_rx_get_linear_stride_sz(struct mlx5_core_dev *mdev,
 					 struct mlx5e_xsk_param *xsk,
 					 bool mpwqe)
 {
+	u32 sz;
+
 	/* XSK frames are mapped as individual pages, because frames may come in
 	 * an arbitrary order from random locations in the UMEM.
 	 */
 	if (xsk)
 		return mpwqe ? 1 << mlx5e_mpwrq_page_shift(mdev, xsk) : PAGE_SIZE;
 
-	/* XDP in mlx5e doesn't support multiple packets per page. */
-	if (params->xdp_prog)
-		return PAGE_SIZE;
+	sz = roundup_pow_of_two(mlx5e_rx_get_linear_sz_skb(params, false));
 
-	return roundup_pow_of_two(mlx5e_rx_get_linear_sz_skb(params, false));
+	/* XDP in mlx5e doesn't support multiple packets per page.
+	 * Do not assume sz <= PAGE_SIZE if params->xdp_prog is set.
+	 */
+	return params->xdp_prog && sz < PAGE_SIZE ? PAGE_SIZE : sz;
 }
 
 static u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5_core_dev *mdev,
--
2.34.1