Message-ID: <812d2df33f801e594cc6ee774c6625def6c9a5c1.camel@nvidia.com>
Date: Mon, 1 Aug 2022 08:08:09 +0000
From: Maxim Mikityanskiy <maximmi@...dia.com>
To: "bjorn@...nel.org" <bjorn@...nel.org>,
"maciej.fijalkowski@...el.com" <maciej.fijalkowski@...el.com>,
"magnus.karlsson@...el.com" <magnus.karlsson@...el.com>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>
CC: Tariq Toukan <tariqt@...dia.com>, Gal Pressman <gal@...dia.com>,
"john.fastabend@...il.com" <john.fastabend@...il.com>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
"ast@...nel.org" <ast@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"hawk@...nel.org" <hawk@...nel.org>,
Saeed Mahameed <saeedm@...dia.com>
Subject: Re: [PATCH net] net/mlx5e: xsk: Discard unaligned XSK frames on
striding RQ
Any comments on this patch, or can it be merged?

Saeed reviewed the mlx5 part.

Björn, Magnus, Maciej, anything to say about the XSK drv part?
On Fri, 2022-07-29 at 15:13 +0300, Maxim Mikityanskiy wrote:
> Striding RQ uses MTT page mapping, where each page corresponds to an XSK
> frame. MTT pages have alignment requirements, and XSK frames don't have
> any alignment guarantees in unaligned mode. Frames with improper
> alignment must be discarded; otherwise, the packet data would be written
> to the wrong address.
>
> Fixes: 282c0c798f8e ("net/mlx5e: Allow XSK frames smaller than a page")
> Signed-off-by: Maxim Mikityanskiy <maximmi@...dia.com>
> Reviewed-by: Tariq Toukan <tariqt@...dia.com>
> Reviewed-by: Saeed Mahameed <saeedm@...dia.com>
> ---
> .../net/ethernet/mellanox/mlx5/core/en/xsk/rx.h | 14 ++++++++++++++
> include/net/xdp_sock_drv.h | 11 +++++++++++
> 2 files changed, 25 insertions(+)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
> index a8cfab4a393c..cc18d97d8ee0 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
> @@ -7,6 +7,8 @@
>  #include "en.h"
>  #include <net/xdp_sock_drv.h>
>  
> +#define MLX5E_MTT_PTAG_MASK 0xfffffffffffffff8ULL
> +
>  /* RX data path */
>  
>  struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
> @@ -21,6 +23,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
>  static inline int mlx5e_xsk_page_alloc_pool(struct mlx5e_rq *rq,
>  					    struct mlx5e_dma_info *dma_info)
>  {
> +retry:
>  	dma_info->xsk = xsk_buff_alloc(rq->xsk_pool);
>  	if (!dma_info->xsk)
>  		return -ENOMEM;
> @@ -32,6 +35,17 @@ static inline int mlx5e_xsk_page_alloc_pool(struct mlx5e_rq *rq,
>  	 */
>  	dma_info->addr = xsk_buff_xdp_get_frame_dma(dma_info->xsk);
>  
> +	/* MTT page mapping has alignment requirements. If they are not
> +	 * satisfied, leak the descriptor so that it won't come again, and try
> +	 * to allocate a new one.
> +	 */
> +	if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
> +		if (unlikely(dma_info->addr & ~MLX5E_MTT_PTAG_MASK)) {
> +			xsk_buff_discard(dma_info->xsk);
> +			goto retry;
> +		}
> +	}
> +
>  	return 0;
>  }
>  
> diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
> index 4aa031849668..0774ce97c2f1 100644
> --- a/include/net/xdp_sock_drv.h
> +++ b/include/net/xdp_sock_drv.h
> @@ -95,6 +95,13 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
>  	xp_free(xskb);
>  }
>  
> +static inline void xsk_buff_discard(struct xdp_buff *xdp)
> +{
> +	struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
> +
> +	xp_release(xskb);
> +}
> +
>  static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
>  {
>  	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> @@ -238,6 +245,10 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
>  {
>  }
>  
> +static inline void xsk_buff_discard(struct xdp_buff *xdp)
> +{
> +}
> +
>  static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
>  {
>  }
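
For anyone skimming the logic rather than the diff: MLX5E_MTT_PTAG_MASK
keeps all but the low three bits of the DMA address, so the check added in
mlx5e_xsk_page_alloc_pool() fires exactly when a frame is not 8-byte
aligned. Below is a minimal user-space sketch of that test (illustrative
only, not the kernel code path; the sample addresses are hypothetical):

/* Illustrative sketch of the alignment test in the patch above; not the
 * kernel code. The addresses below are made-up examples.
 */
#include <stdint.h>
#include <stdio.h>

#define MLX5E_MTT_PTAG_MASK 0xfffffffffffffff8ULL

/* True when addr satisfies the 8-byte MTT alignment requirement. */
static int mtt_aligned(uint64_t addr)
{
	return !(addr & ~MLX5E_MTT_PTAG_MASK);
}

int main(void)
{
	const uint64_t addrs[] = { 0x100000, 0x100008, 0x100004, 0x10000b };
	size_t i;

	for (i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++)
		printf("addr 0x%llx -> %s\n", (unsigned long long)addrs[i],
		       mtt_aligned(addrs[i]) ? "use" : "discard and retry");
	return 0;
}

Note that a failing frame is dropped via the new xsk_buff_discard() rather
than xsk_buff_free(): as the comment in the patch says, the descriptor is
leaked on purpose so the same misaligned address doesn't come back on the
next allocation, and a fresh frame is allocated instead.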