Message-ID: <253msuwirzi.fsf@nvidia.com>
Date: Wed, 29 Nov 2023 15:55:29 +0200
From: Aurelien Aptel <aaptel@...dia.com>
To: Sagi Grimberg <sagi@...mberg.me>, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, hch@....de, kbusch@...nel.org, axboe@...com,
chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org
Cc: Boris Pismenny <borisp@...dia.com>, aurelien.aptel@...il.com,
smalin@...dia.com, malin1024@...il.com, ogerlitz@...dia.com,
yorayz@...dia.com, galshalom@...dia.com, mgurtovoy@...dia.com
Subject: Re: [PATCH v20 06/20] nvme-tcp: Add DDP data-path
Sagi Grimberg <sagi@...mberg.me> writes:
>> +static void nvme_tcp_complete_request(struct request *rq,
>> +				       __le16 status,
>> +				       union nvme_result result,
>> +				       __u16 command_id)
>> +{
>> +#ifdef CONFIG_ULP_DDP
>> +	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
>> +
>> +	if (req->offloaded) {
>> +		req->ddp_status = status;
>
> unless this is really a ddp_status, don't name it as such. AFAICT
> it is the nvme status, so let's stay consistent with the naming.
>
> btw, we can promote the request status/result capture out of
> CONFIG_ULP_DDP to the general logic, which I think will make the
> code slightly simpler.
>
> This will be consistent with what we do in nvme-rdma and PI.
OK, we will rename status to nvme_status, and move it and the result
capture out of the ifdef.
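
For v21, something along these lines (untested sketch; the
req->nvme_status / req->result fields and the nvme_tcp_teardown_ddp
helper follow the current series naming and may still change):

static void nvme_tcp_complete_request(struct request *rq,
				      __le16 status,
				      union nvme_result result,
				      __u16 command_id)
{
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

	/* capture status/result unconditionally, as nvme-rdma does */
	req->nvme_status = status;
	req->result = result;

#ifdef CONFIG_ULP_DDP
	if (req->offloaded) {
		/* the request completes from the DDP teardown path */
		nvme_tcp_teardown_ddp(req->queue, rq);
		return;
	}
#endif

	if (!nvme_try_complete_req(rq, status, result))
		nvme_complete_rq(rq);
}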
>> @@ -1283,6 +1378,9 @@ static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req)
>>  	else
>>  		msg.msg_flags |= MSG_EOR;
>>
>> +	if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
>> +		nvme_tcp_setup_ddp(queue, blk_mq_rq_from_pdu(req));
>> +
>
> We keep coming back to this. Why isn't setup done at setup time?
Sorry, this is a leftover from previous tests; we will move the call to
request setup time, as we agreed last time [1].
1: https://lore.kernel.org/all/ef66595c-95cd-94c4-7f51-d3d7683a188a@grimberg.me/
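
In other words, the call moves out of the send path and into
nvme_tcp_setup_cmd_pdu(), roughly like this (untested, hunk anchors
approximate):

--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req)
 	else
 		msg.msg_flags |= MSG_EOR;

-	if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
-		nvme_tcp_setup_ddp(queue, blk_mq_rq_from_pdu(req));
-
@@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns, struct request *rq)
+	/* map the request for DDP once, at setup time, not per send */
+	if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
+		nvme_tcp_setup_ddp(queue, rq);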