Message-ID: <20231223174845.GJ201037@kernel.org>
Date: Sat, 23 Dec 2023 17:48:45 +0000
From: Simon Horman <horms@...nel.org>
To: Aurelien Aptel <aaptel@...dia.com>
Cc: linux-nvme@...ts.infradead.org, netdev@...r.kernel.org,
sagi@...mberg.me, hch@....de, kbusch@...nel.org, axboe@...com,
chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org,
aurelien.aptel@...il.com, smalin@...dia.com, malin1024@...il.com,
ogerlitz@...dia.com, yorayz@...dia.com, borisp@...dia.com,
galshalom@...dia.com, mgurtovoy@...dia.com
Subject: Re: [PATCH v22 16/20] net/mlx5e: NVMEoTCP, queue init/teardown
On Thu, Dec 21, 2023 at 09:33:54PM +0000, Aurelien Aptel wrote:
> From: Ben Ben-Ishay <benishay@...dia.com>
>
> Add the ddp ops sk_add and sk_del, and advertise offload limits.
>
> When nvme-tcp establishes a new queue/connection, the sk_add op is called.
> We allocate a hardware context to offload operations for this queue:
> - use a steering rule based on the connection 5-tuple to mark packets
> of this queue/connection with a flow-tag in their completion (cqe)
> - use a dedicated TIR to identify the queue and maintain the HW context
> - use a dedicated ICOSQ to maintain the HW context by UMR postings
> - use a dedicated tag buffer for buffer registration
> - maintain static and progress HW contexts by posting the proper WQEs.
>
> When nvme-tcp tears down a queue/connection, the sk_del op is called.
> We tear down the queue and free the corresponding contexts.
>
> The offload limits we advertise reflect the maximum supported SG (scatter-gather) size.
>
> [Re-enabled calling open/close icosq out of en_main.c]
>
> Signed-off-by: Ben Ben-Ishay <benishay@...dia.com>
> Signed-off-by: Boris Pismenny <borisp@...dia.com>
> Signed-off-by: Or Gerlitz <ogerlitz@...dia.com>
> Signed-off-by: Yoray Zack <yorayz@...dia.com>
> Signed-off-by: Aurelien Aptel <aaptel@...dia.com>
> Reviewed-by: Tariq Toukan <tariqt@...dia.com>
...
> +static int
> +mlx5e_nvmeotcp_build_icosq(struct mlx5e_nvmeotcp_queue *queue, struct mlx5e_priv *priv, int io_cpu)
> +{
> + u16 max_sgl, max_klm_per_wqe, max_umr_per_ccid, sgl_rest, wqebbs_rest;
> + struct mlx5e_channel *c = priv->channels.c[queue->channel_ix];
> + struct mlx5e_sq_param icosq_param = {};
> + struct mlx5e_create_cq_param ccp = {};
> + struct dim_cq_moder icocq_moder = {};
> + struct mlx5e_icosq *icosq;
> + int err = -ENOMEM;
> + u16 log_icosq_sz;
> + u32 max_wqebbs;
> +
> + icosq = &queue->sq;
> + max_sgl = mlx5e_get_max_sgl(priv->mdev);
> + max_klm_per_wqe = queue->max_klms_per_wqe;
> + max_umr_per_ccid = max_sgl / max_klm_per_wqe;
> + sgl_rest = max_sgl % max_klm_per_wqe;
> + wqebbs_rest = sgl_rest ? MLX5E_KLM_UMR_WQEBBS(sgl_rest) : 0;
> + max_wqebbs = (MLX5E_KLM_UMR_WQEBBS(max_klm_per_wqe) *
> + max_umr_per_ccid + wqebbs_rest) * queue->size;
> + log_icosq_sz = order_base_2(max_wqebbs);
> +
> + mlx5e_build_icosq_param(priv->mdev, log_icosq_sz, &icosq_param);
> + ccp.napi = &queue->qh.napi;
> + ccp.ch_stats = &priv->channel_stats[queue->channel_ix]->ch;
> + ccp.node = cpu_to_node(io_cpu);
> + ccp.ix = queue->channel_ix;
> +
> + err = mlx5e_open_cq(priv, icocq_moder, &icosq_param.cqp, &ccp, &icosq->cq);
Hi Aurelien and Ben,
This doesn't seem to compile with gcc-13 (allmodconfig, x86_64):
.../nvmeotcp.c: In function 'mlx5e_nvmeotcp_build_icosq':
.../nvmeotcp.c:472:29: error: passing argument 1 of 'mlx5e_open_cq' from incompatible pointer type [-Werror=incompatible-pointer-types]
472 | err = mlx5e_open_cq(priv, icocq_moder, &icosq_param.cqp, &ccp, &icosq->cq);
| ^~~~
| |
| struct mlx5e_priv *
In file included from .../nvmeotcp.h:9,
from .../nvmeotcp.c:7:
....h:1065:41: note: expected 'struct mlx5_core_dev *' but argument is of type 'struct mlx5e_priv *'
1065 | int mlx5e_open_cq(struct mlx5_core_dev *mdev, struct dim_cq_moder moder,
| ~~~~~~~~~~~~~~~~~~~~~~^~~~
cc1: all warnings being treated as errors
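
It looks like mlx5e_open_cq() now takes a struct mlx5_core_dev * as its
first argument. If so, perhaps something like the following (untested,
just a sketch based on the expected signature in the error above, and on
priv->mdev already being used elsewhere in this function) would resolve it:

	/* pass the core device rather than the priv */
	err = mlx5e_open_cq(priv->mdev, icocq_moder, &icosq_param.cqp,
			    &ccp, &icosq->cq);
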
> + if (err)
> + goto err_nvmeotcp_sq;
> + err = mlx5e_open_icosq(c, &priv->channels.params, &icosq_param, icosq,
> + mlx5e_nvmeotcp_icosq_err_cqe_work);
> + if (err)
> + goto close_cq;
> +
> + spin_lock_init(&queue->sq_lock);
> + return 0;
> +
> +close_cq:
> + mlx5e_close_cq(&icosq->cq);
> +err_nvmeotcp_sq:
> + return err;
> +}
...