Message-ID: <1a28b970-1954-a482-5906-c6ee96b248f0@grimberg.me>
Date: Mon, 14 Aug 2023 22:01:14 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Aurelien Aptel <aaptel@...dia.com>, linux-nvme@...ts.infradead.org,
 netdev@...r.kernel.org, hch@....de, kbusch@...nel.org, axboe@...com,
 chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org
Cc: Boris Pismenny <borisp@...dia.com>, aurelien.aptel@...il.com,
 smalin@...dia.com, malin1024@...il.com, ogerlitz@...dia.com,
 yorayz@...dia.com, galshalom@...dia.com, mgurtovoy@...dia.com
Subject: Re: [PATCH v12 08/26] nvme-tcp: Add DDP data-path


>>> @@ -1308,6 +1407,15 @@ static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req)
>>>        else
>>>                msg.msg_flags |= MSG_EOR;
>>>
>>> +     if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags)) {
>>> +             ret = nvme_tcp_setup_ddp(queue, pdu->cmd.common.command_id,
>>> +                                      blk_mq_rq_from_pdu(req));
>>> +             WARN_ONCE(ret, "ddp setup failed (queue 0x%x, cid 0x%x, ret=%d)",
>>> +                       nvme_tcp_queue_id(queue),
>>> +                       pdu->cmd.common.command_id,
>>> +                       ret);
>>> +     }
>>
>> Any reason why this is done here, when sending the command pdu, and
>> not at setup time?
> 
> We want to interact with the HW from the same CPU per queue, hence we
> call setup_ddp() only after the queue->io_cpu == raw_smp_processor_id()
> check in nvme_tcp_queue_request() has passed.
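
(For context, the affinity fast path being referred to looks roughly
like this in mainline nvme_tcp_queue_request(); paraphrased sketch from
memory, not the exact code in this series:)

	static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
			bool sync, bool last)
	{
		struct nvme_tcp_queue *queue = req->queue;
		bool empty;

		empty = llist_add(&req->lentry, &queue->req_list) &&
			list_empty(&queue->send_list) && !queue->request;

		/* Send inline only if we already run on the queue's io_cpu,
		 * so the submission path and io_work do not contend. */
		if (queue->io_cpu == raw_smp_processor_id() &&
		    sync && empty && mutex_trylock(&queue->send_mutex)) {
			nvme_tcp_send_all(queue);
			mutex_unlock(&queue->send_mutex);
		} else if (last) {
			queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
		}
	}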

That is very fragile. You cannot depend on this micro-optimization
staying in the code. Is this related to a hidden steering rule you are
adding to the HW?

Which reminds me: in the control patch you are passing io_cpu. That is
also a dependency that should be avoided; you should use the same
mechanism as aRFS does to learn where the socket is being reaped.
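
(For reference, the hook aRFS uses for this is ndo_rx_flow_steer: the
core RPS code calls it when it sees a flow being consumed on a new CPU,
handing the driver the RX queue mapped to that CPU. Rough sketch of a
driver-side implementation; my_dev and my_steer_flow_to_queue are
made-up names, not code from this series:)

	/* Called from the RPS/aRFS path (CONFIG_RFS_ACCEL) whenever the
	 * stack observes the flow being consumed on a different CPU.
	 * rxq_index is the RX queue associated with that CPU; the driver
	 * installs a HW rule steering the flow there and returns a
	 * filter id (or -errno). */
	static int my_ndo_rx_flow_steer(struct net_device *dev,
					const struct sk_buff *skb,
					u16 rxq_index, u32 flow_id)
	{
		struct my_dev *mdev = netdev_priv(dev);

		return my_steer_flow_to_queue(mdev, skb, rxq_index, flow_id);
	}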
