Message-ID: <b82d4ecd-77f3-a562-ec5c-48b0c8ed06f8@grimberg.me>
Date: Wed, 20 Sep 2023 13:11:47 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Aurelien Aptel <aaptel@...dia.com>, linux-nvme@...ts.infradead.org,
 netdev@...r.kernel.org, hch@....de, kbusch@...nel.org, axboe@...com,
 chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org
Cc: Boris Pismenny <borisp@...dia.com>, aurelien.aptel@...il.com,
 smalin@...dia.com, malin1024@...il.com, ogerlitz@...dia.com,
 yorayz@...dia.com, galshalom@...dia.com, mgurtovoy@...dia.com
Subject: Re: [PATCH v15 06/20] nvme-tcp: Add DDP data-path


>> Can you please explain why? sk_incoming_cpu is updated from the network
>> recv path, while you are arguing that the timing matters before you even
>> send the pdu. I don't understand why that should matter.
> 
> Sorry, the original answer was misleading.
> The problem is not about timing, but only about which CPU the code is
> running on.  If we move setup_ddp() earlier as you suggested, it can
> result in it running on the wrong CPU.

Please define wrong CPU.

> Calling setup_ddp() in nvme_tcp_setup_cmd_pdu() will not guarantee we
> are running on queue->io_cpu.
> It's only during nvme_tcp_queue_request() that we either know we are
> running on queue->io_cpu, or dispatch it to run on queue->io_cpu.
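
For reference, the dispatch in question looks roughly like this
(simplified sketch of nvme_tcp_queue_request() in
drivers/nvme/host/tcp.c, conditions abbreviated):

static void nvme_tcp_queue_request(struct nvme_tcp_request *req,
		bool sync, bool last)
{
	struct nvme_tcp_queue *queue = req->queue;
	bool empty;

	empty = llist_add(&req->lentry, &queue->req_list) &&
		list_empty(&queue->send_list) && !queue->request;

	if (queue->io_cpu == raw_smp_processor_id() &&
	    sync && empty && mutex_trylock(&queue->send_mutex)) {
		/* already on queue->io_cpu: send inline */
		nvme_tcp_send_all(queue);
		mutex_unlock(&queue->send_mutex);
	} else if (last) {
		/* otherwise punt io_work to queue->io_cpu */
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	}
}

Understood that this is the first point where we are guaranteed to be
on (or dispatch to) queue->io_cpu.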

But sk_incoming_cpu is updated with the cpu that is reading the
socket, so in fact it should converge to the io_cpu - shouldn't it?
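
For context, sk_incoming_cpu is refreshed on every socket read,
roughly like sk_incoming_cpu_update() in include/net/sock.h:

static inline void sk_incoming_cpu_update(struct sock *sk)
{
	int cpu = raw_smp_processor_id();

	/* remember the cpu that last read this socket */
	if (unlikely(READ_ONCE(sk->sk_incoming_cpu) != cpu))
		WRITE_ONCE(sk->sk_incoming_cpu, cpu);
}

So once io_work has done the receives on queue->io_cpu, the field
should point there.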

Can you please provide a concrete explanation for the performance
degradation?

> As it is only a performance optimization for the unlikely case, we can
> move it to nvme_tcp_setup_cmd_pdu() as you suggested and reconsider in
> the future if it turns out to be needed.

Would still like to understand this case.
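
If it helps to frame it, my understanding of the proposed move is
roughly the following sketch (not the actual patch; nvme_tcp_setup_ddp()
stands in for the patch's setup helper, and the existing PDU setup is
elided):

static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
		struct request *rq)
{
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
	struct nvme_tcp_queue *queue = req->queue;

	/* ... existing cmd pdu setup ... */

	/* hypothetical placement: may execute on any CPU, not
	 * necessarily queue->io_cpu
	 */
	nvme_tcp_setup_ddp(queue, rq);

	return BLK_STS_OK;
}

i.e. the setup would run from whichever CPU submits the request, which
is the concern being raised.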
