Message-ID: <24ea956e-40a2-8b7b-cf8a-b604e7cd5644@grimberg.me>
Date: Thu, 8 Oct 2020 16:00:30 -0700
From: Sagi Grimberg <sagi@...mberg.me>
To: Boris Pismenny <borisp@...lanox.com>, kuba@...nel.org,
davem@...emloft.net, saeedm@...dia.com, hch@....de, axboe@...com,
kbusch@...nel.org, viro@...iv.linux.org.uk, edumazet@...gle.com
Cc: Yoray Zack <yorayz@...lanox.com>,
Ben Ben-Ishay <benishay@...lanox.com>,
boris.pismenny@...il.com, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, Or Gerlitz <ogerlitz@...lanox.com>
Subject: Re: [PATCH net-next RFC v1 06/10] nvme-tcp: Add DDP data-path
>> static int nvme_tcp_offload_socket(struct nvme_tcp_queue *queue,
>>                                    struct nvme_tcp_config *config)
>> @@ -630,6 +720,7 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
>> static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
>> struct nvme_completion *cqe)
>> {
>> + struct nvme_tcp_request *req;
>> struct request *rq;
>> rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), cqe->command_id);
>> @@ -641,8 +732,15 @@ static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
>> return -EINVAL;
>> }
>> - if (!nvme_try_complete_req(rq, cqe->status, cqe->result))
>> - nvme_complete_rq(rq);
>> + req = blk_mq_rq_to_pdu(rq);
>> + if (req->offloaded) {
>> + req->status = cqe->status;
>> + req->result = cqe->result;
>> + nvme_tcp_teardown_ddp(queue, cqe->command_id, rq);
>> + } else {
>> + if (!nvme_try_complete_req(rq, cqe->status, cqe->result))
>> + nvme_complete_rq(rq);
>> + }
Oh, I forgot to ask:
There are places in the driver where we may complete (cancel) one
or more requests from the error-recovery or timeout flows. We
first prevent future incoming RX on the socket so that we can
safely cancel requests. This may break with the deferred
completion in ddp_teardown_done.
If a request is waiting for ddp_teardown_done, do I have a way
to tell the HW to never call ddp_teardown_done on a specific
socket?
If so, the place to do it is in nvme_tcp_stop_queue.
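To make the concern concrete, here is a toy user-space model of the race
being described, a minimal sketch only: requests that have handed their
completion off to the asynchronous ddp_teardown_done callback must be
flushed synchronously before the socket is torn down, otherwise they
dangle forever. All names here (toy_queue, toy_stop_queue, the state
enum) are hypothetical stand-ins, not the real driver structures or the
real offload API.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical request states: a request that took the offloaded
 * completion path parks in REQ_WAIT_TEARDOWN until the HW calls
 * ddp_teardown_done for it. */
enum toy_req_state { REQ_IDLE, REQ_WAIT_TEARDOWN, REQ_COMPLETED };

struct toy_req {
	bool offloaded;
	enum toy_req_state state;
};

#define TOY_NR_REQS 4

struct toy_queue {
	bool rx_enabled;
	struct toy_req reqs[TOY_NR_REQS];
};

/* Normal path: the HW invokes this asynchronously once DDP teardown
 * for the request has finished. */
static void toy_ddp_teardown_done(struct toy_req *req)
{
	req->state = REQ_COMPLETED;
}

/* The quiesce point the mail asks for (modeled on where
 * nvme_tcp_stop_queue would do it): first block future RX, then,
 * assuming the HW provides a way to guarantee ddp_teardown_done will
 * never fire again for this socket, complete any stragglers inline so
 * cancellation cannot race with a late callback. */
static void toy_stop_queue(struct toy_queue *q)
{
	int i;

	q->rx_enabled = false;                  /* 1. no new RX/completions */
	for (i = 0; i < TOY_NR_REQS; i++) {     /* 2. flush deferred teardowns */
		struct toy_req *r = &q->reqs[i];

		if (r->state == REQ_WAIT_TEARDOWN)
			toy_ddp_teardown_done(r);  /* complete synchronously */
	}
}
```

Without step 2 (or an equivalent HW-side cancellation guarantee), a
request stuck in REQ_WAIT_TEARDOWN would never complete once the socket
is gone, which is exactly the breakage described above.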