Message-ID: <253v8c5fdc3.fsf@nvidia.com>
Date: Wed, 20 Sep 2023 11:39:24 +0300
From: Aurelien Aptel <aaptel@...dia.com>
To: Sagi Grimberg <sagi@...mberg.me>, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, hch@....de, kbusch@...nel.org, axboe@...com,
chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org
Cc: Boris Pismenny <borisp@...dia.com>, aurelien.aptel@...il.com,
smalin@...dia.com, malin1024@...il.com, ogerlitz@...dia.com,
yorayz@...dia.com, galshalom@...dia.com, mgurtovoy@...dia.com
Subject: Re: [PATCH v15 06/20] nvme-tcp: Add DDP data-path

Sagi Grimberg <sagi@...mberg.me> writes:
> Can you please explain why? sk_incoming_cpu is updated from the network
> recv path while you are arguing that the timing matters before you even
> send the pdu. I don't understand why should that matter.

Sorry, the original answer was misleading.

The problem is not about timing but about which CPU the code runs on.
If we move setup_ddp() earlier as you suggested, it can result in it
running on the wrong CPU.

Calling setup_ddp() in nvme_tcp_setup_cmd_pdu() will not guarantee we
are running on queue->io_cpu. It is only in nvme_tcp_queue_request()
that we either know we are already running on queue->io_cpu, or
dispatch the request to run on queue->io_cpu.

As this is only a performance optimization for the unlikely case, we
can move setup_ddp() to nvme_tcp_setup_cmd_pdu() as you suggested and
reconsider it in the future if needed.

Thanks