Message-ID: <37861060-9651-49c8-e583-2b070914361c@gmail.com>
Date: Mon, 18 Jan 2021 20:47:08 -0700
From: David Ahern <dsahern@...il.com>
To: Boris Pismenny <borisp@...lanox.com>, kuba@...nel.org,
davem@...emloft.net, saeedm@...dia.com, hch@....de,
sagi@...mberg.me, axboe@...com, kbusch@...nel.org,
viro@...iv.linux.org.uk, edumazet@...gle.com, smalin@...vell.com
Cc: boris.pismenny@...il.com, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, benishay@...dia.com, ogerlitz@...dia.com,
yorayz@...dia.com, Ben Ben-Ishay <benishay@...lanox.com>,
Or Gerlitz <ogerlitz@...lanox.com>,
Yoray Zack <yorayz@...lanox.com>
Subject: Re: [PATCH v2 net-next 06/21] nvme-tcp: Add DDP offload control path

On 1/14/21 8:10 AM, Boris Pismenny wrote:
> +static
> +int nvme_tcp_offload_socket(struct nvme_tcp_queue *queue)
> +{
> + struct net_device *netdev = get_netdev_for_sock(queue->sock->sk, true);
> + struct nvme_tcp_ddp_config config = {};
> + int ret;
> +
> + if (!netdev) {
> + dev_info_ratelimited(queue->ctrl->ctrl.device, "netdev not found\n");
> + return -ENODEV;
> + }
> +
> + if (!(netdev->features & NETIF_F_HW_TCP_DDP)) {
> + dev_put(netdev);
> + return -EOPNOTSUPP;
> + }
> +
> + config.cfg.type = TCP_DDP_NVME;
> + config.pfv = NVME_TCP_PFV_1_0;
> + config.cpda = 0;
> + config.dgst = queue->hdr_digest ?
> + NVME_TCP_HDR_DIGEST_ENABLE : 0;
> + config.dgst |= queue->data_digest ?
> + NVME_TCP_DATA_DIGEST_ENABLE : 0;
> + config.queue_size = queue->queue_size;
> + config.queue_id = nvme_tcp_queue_id(queue);
> + config.io_cpu = queue->io_cpu;
> +
> + ret = netdev->tcp_ddp_ops->tcp_ddp_sk_add(netdev,
> + queue->sock->sk,
> + (struct tcp_ddp_config *)&config);

The typecast is not needed; tcp_ddp_config is embedded in
nvme_tcp_ddp_config, so you can pass &config.cfg directly.
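For example (only a sketch; the layout of nvme_tcp_ddp_config is assumed
from the patch context, with the generic tcp_ddp_config embedded as the
cfg member):

	struct nvme_tcp_ddp_config config = {};

	config.cfg.type = TCP_DDP_NVME;
	/* ... pfv/cpda/dgst/queue fields set as in the patch ... */

	/* pass the embedded generic struct directly, no cast needed */
	ret = netdev->tcp_ddp_ops->tcp_ddp_sk_add(netdev,
						  queue->sock->sk,
						  &config.cfg);
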
> + if (ret) {
> + dev_put(netdev);
> + return ret;
> + }
> +
> + inet_csk(queue->sock->sk)->icsk_ulp_ddp_ops = &nvme_tcp_ddp_ulp_ops;
> + if (netdev->features & NETIF_F_HW_TCP_DDP)
> + set_bit(NVME_TCP_Q_OFF_DDP, &queue->flags);
> +
> + return ret;
> +}
> +
> +static
> +void nvme_tcp_unoffload_socket(struct nvme_tcp_queue *queue)
> +{
> + struct net_device *netdev = queue->ctrl->offloading_netdev;
> +
> + if (!netdev) {
> + dev_info_ratelimited(queue->ctrl->ctrl.device, "netdev not found\n");
> + return;
> + }
> +
> + netdev->tcp_ddp_ops->tcp_ddp_sk_del(netdev, queue->sock->sk);
> +
> + inet_csk(queue->sock->sk)->icsk_ulp_ddp_ops = NULL;
> + dev_put(netdev); /* put the queue_init get_netdev_for_sock() */

Have you validated the netdev reference counts? You have a put here, and ...
> +}
> +
> +static
> +int nvme_tcp_offload_limits(struct nvme_tcp_queue *queue)
> +{
> + struct net_device *netdev = get_netdev_for_sock(queue->sock->sk, true);
... a get here ...
> + struct tcp_ddp_limits limits;
> + int ret = 0;
> +
> + if (!netdev) {
> + dev_info_ratelimited(queue->ctrl->ctrl.device, "netdev not found\n");
> + return -ENODEV;
> + }
> +
> + if (netdev->features & NETIF_F_HW_TCP_DDP &&
> + netdev->tcp_ddp_ops &&
> + netdev->tcp_ddp_ops->tcp_ddp_limits)
> + ret = netdev->tcp_ddp_ops->tcp_ddp_limits(netdev, &limits);
> + else
> + ret = -EOPNOTSUPP;
> +
> + if (!ret) {
> + queue->ctrl->offloading_netdev = netdev;
... you have the device here, but then ...
> + dev_dbg_ratelimited(queue->ctrl->ctrl.device,
> + "netdev %s offload limits: max_ddp_sgl_len %d\n",
> + netdev->name, limits.max_ddp_sgl_len);
> + queue->ctrl->ctrl.max_segments = limits.max_ddp_sgl_len;
> + queue->ctrl->ctrl.max_hw_sectors =
> + limits.max_ddp_sgl_len << (ilog2(SZ_4K) - 9);
> + } else {
> + queue->ctrl->offloading_netdev = NULL;
> + }
> +
> + dev_put(netdev);

... put here. And this is the limit-checking function, which seems like an
odd place to set offloading_netdev, while nvme_tcp_offload_socket sets no
queue variable yet hangs on to a netdev reference count.

netdev reference count leaks are an absolute PITA to find. Code that
takes and puts the counts should be clear and obvious as to when and
why. The symmetry of offload and unoffload is clear when the offload
saves the netdev address in offloading_netdev. What you have now is dubious.
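
Something along the following lines (only a sketch, reusing the names from
this patch, with the checks and config setup trimmed) keeps the get and the
put visibly paired:

	static int nvme_tcp_offload_socket(struct nvme_tcp_queue *queue)
	{
		struct net_device *netdev = get_netdev_for_sock(queue->sock->sk, true);
		struct nvme_tcp_ddp_config config = {};
		int ret;

		/* ... netdev/feature checks and config setup as in the patch ... */

		ret = netdev->tcp_ddp_ops->tcp_ddp_sk_add(netdev, queue->sock->sk,
							  &config.cfg);
		if (ret) {
			dev_put(netdev);
			return ret;
		}

		/* hold the reference taken above for as long as the offload is active */
		queue->ctrl->offloading_netdev = netdev;
		inet_csk(queue->sock->sk)->icsk_ulp_ddp_ops = &nvme_tcp_ddp_ulp_ops;
		set_bit(NVME_TCP_Q_OFF_DDP, &queue->flags);
		return 0;
	}

	static void nvme_tcp_unoffload_socket(struct nvme_tcp_queue *queue)
	{
		struct net_device *netdev = queue->ctrl->offloading_netdev;

		if (!netdev)
			return;

		netdev->tcp_ddp_ops->tcp_ddp_sk_del(netdev, queue->sock->sk);
		inet_csk(queue->sock->sk)->icsk_ulp_ddp_ops = NULL;
		queue->ctrl->offloading_netdev = NULL;
		dev_put(netdev);	/* matches the get in nvme_tcp_offload_socket() */
	}

nvme_tcp_offload_limits() can then do its own local get/put pair without
touching offloading_netdev at all.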