Date: Mon, 29 Apr 2024 14:35:33 +0300
From: Aurelien Aptel <aaptel@...dia.com>
To: Sagi Grimberg <sagi@...mberg.me>, linux-nvme@...ts.infradead.org,
 netdev@...r.kernel.org, hch@....de, kbusch@...nel.org, axboe@...com,
 chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org
Cc: Boris Pismenny <borisp@...dia.com>, aurelien.aptel@...il.com,
 smalin@...dia.com, malin1024@...il.com, ogerlitz@...dia.com,
 yorayz@...dia.com, galshalom@...dia.com, mgurtovoy@...dia.com,
 edumazet@...gle.com, pabeni@...hat.com, dsahern@...nel.org,
 ast@...nel.org, jacob.e.keller@...el.com
Subject: Re: [PATCH v24 01/20] net: Introduce direct data placement tcp offload

Sagi Grimberg <sagi@...mberg.me> writes:
> This is not simply a steering rule that can be overwritten at any point?

No, unlike steering rules, the offload resources cannot be moved to a
different queue.

In order to move them, we would need to re-create the queue and the
resources assigned to it.  We will consider improving the HW/FW/SW to
allow this in a future version.

> I was simply referring to the fact that you set config->io_cpu from
> sk->sk_incoming_cpu
> and then you pass sk (and config) to .sk_add, so why does this
> assignment need to
> exist here and not below the interface down at the driver?

You're correct, it doesn't need to exist *if* we use
sk->sk_incoming_cpu, but at the point where it is read, that field
holds the wrong value.
The right value for cfg->io_cpu is nvme_queue->io_cpu.
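To make that concrete, here is a rough sketch of the first option
(struct and field names are approximations, not the actual code in the
series):

static void nvme_tcp_fill_ddp_config(struct nvme_tcp_queue *queue,
				     struct ulp_ddp_config *cfg)
{
	/*
	 * Take the CPU the queue's io_work runs on, rather than
	 * sk->sk_incoming_cpu: at queue setup time no data has been
	 * received yet, so sk_incoming_cpu does not reflect where the
	 * I/O for this queue will actually be processed.
	 */
	cfg->io_cpu = queue->io_cpu;
}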

So either:
- we keep cfg->io_cpu and set it to nvme_queue->io_cpu, or
- we remove cfg->io_cpu and offload the socket from
  nvme_tcp_io_work(), where the io_cpu is implicitly going to be
  the current CPU (see the sketch below).
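
A rough sketch of the second option (illustrative only;
nvme_tcp_ddp_offload_pending() and nvme_tcp_offload_socket() are
placeholder names, not the actual code):

static void nvme_tcp_io_work(struct work_struct *w)
{
	struct nvme_tcp_queue *queue =
		container_of(w, struct nvme_tcp_queue, io_work);

	/*
	 * io_work is queued on queue->io_cpu, so offloading the socket
	 * from here binds the offload to the current CPU implicitly,
	 * and cfg->io_cpu can be dropped.
	 */
	if (nvme_tcp_ddp_offload_pending(queue))
		nvme_tcp_offload_socket(queue);

	/* ... usual send/recv processing ... */
}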

Thanks
