Message-ID: <253msraujw2.fsf@nvidia.com>
Date: Thu, 07 Mar 2024 17:44:13 +0200
From: Aurelien Aptel <aaptel@...dia.com>
To: Sagi Grimberg <sagi@...mberg.me>, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, hch@....de, kbusch@...nel.org, axboe@...com,
chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org
Cc: Boris Pismenny <borisp@...dia.com>, aurelien.aptel@...il.com,
smalin@...dia.com, malin1024@...il.com, ogerlitz@...dia.com,
yorayz@...dia.com, galshalom@...dia.com, mgurtovoy@...dia.com
Subject: Re: [PATCH v23 06/20] nvme-tcp: Add DDP data-path
Sagi Grimberg <sagi@...mberg.me> writes:
>> +static void nvme_tcp_complete_request(struct request *rq,
>> +				       __le16 status,
>> +				       union nvme_result result,
>> +				       __u16 command_id)
>> +{
>> +	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
>> +
>> +	if (nvme_tcp_is_ddp_offloaded(req)) {
>> +		req->nvme_status = status;
>
> this can just be called req->status I think.
Since req->status already exists, we checked whether it can safely be
used instead of adding nvme_status, and it appears to be fine. We will
remove nvme_status.
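
For reference, a minimal sketch of the fields involved, assuming the
current upstream layout of struct nvme_tcp_request (all members other
than status and result are elided; result is the field this series
captures for deferred completion):

struct nvme_tcp_request {
	struct nvme_request	req;
	/* ... other members elided ... */
	__le16			status;	/* already upstream, reused here */
	union nvme_result	result;	/* captured by this series */
};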
>> +		req->result = result;
> I think it will be cleaner to always capture req->result and req->status
> regardless of ddp offload.
Sure, we will set status and result in the function before the offload
check:
static void nvme_tcp_complete_request(struct request *rq,
				      __le16 status,
				      union nvme_result result,
				      __u16 command_id)
{
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

	req->status = status;
	req->result = result;

	if (nvme_tcp_is_ddp_offloaded(req)) {
		/* complete when teardown is confirmed to be done */
		nvme_tcp_teardown_ddp(req->queue, rq);
		return;
	}

	if (!nvme_try_complete_req(rq, status, result))
		nvme_complete_rq(rq);
}
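
For completeness, a hedged sketch of the deferred completion path that
the "complete when teardown is confirmed" comment refers to. The
callback name nvme_tcp_ddp_teardown_done and its void *ddp_ctx argument
are assumptions based on this discussion, not necessarily the final
patch:

/* Assumed callback, invoked by the offload driver once the DDP
 * teardown is confirmed done; it finishes the request using the
 * status and result captured in nvme_tcp_complete_request() above.
 */
static void nvme_tcp_ddp_teardown_done(void *ddp_ctx)
{
	struct request *rq = ddp_ctx;
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

	if (!nvme_try_complete_req(rq, req->status, req->result))
		nvme_complete_rq(rq);
}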