Message-ID: <94795ed1-f7ac-3d1c-9bd6-fcaaaf5f1fd4@gmail.com>
Date:   Sat, 11 Mar 2023 20:56:16 +0000
From:   Pavel Begunkov <asml.silence@...il.com>
To:     Jens Axboe <axboe@...nel.dk>, Breno Leitao <leitao@...ian.org>,
        io-uring@...r.kernel.org
Cc:     leit@...com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] io_uring: One wqe per wq

On 3/10/23 20:38, Jens Axboe wrote:
> On 3/10/23 1:11 PM, Breno Leitao wrote:
>> Right now io_wq allocates one io_wqe per NUMA node.  As io_wq is now
>> bound to a task, the task basically uses only the NUMA-local io_wqe
>> and almost never changes NUMA nodes; thus, the other wqes are mostly
>> unused.
> 
> What if the task gets migrated to a different node? Unless the task
> is pinned to a node/cpumask that is local to that node, it will move
> around freely.

In which case we're screwed anyway, and not only on the slow io-wq
path but also on the hot path, as the rings and all the io_uring ctx
and requests won't be migrated locally.
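
To make the "mostly unused" point from the patch description concrete,
here's a minimal userspace sketch of the per-node layout (the wq/wqe
names are reused for readability only; none of this is the actual
kernel code). A task that stays on one node funnels all its work into
its local wqe, and the per-node siblings sit allocated but idle:

  /* Simplified model of "one wqe per NUMA node"; illustrative only. */
  #include <stdio.h>
  #include <stdlib.h>

  #define MAX_NODES 4

  struct wqe {                    /* per-node worker queue (hypothetical) */
          int node;
          unsigned long enqueued; /* work items this node's queue saw */
  };

  struct wq {
          struct wqe *wqes[MAX_NODES]; /* one wqe per NUMA node */
  };

  static struct wq *wq_create(void)
  {
          struct wq *wq = calloc(1, sizeof(*wq));

          for (int n = 0; n < MAX_NODES; n++) {
                  wq->wqes[n] = calloc(1, sizeof(struct wqe));
                  wq->wqes[n]->node = n;
          }
          return wq;
  }

  static void wq_enqueue(struct wq *wq, int cur_node)
  {
          /* Work is always queued on the submitter's local node. */
          wq->wqes[cur_node]->enqueued++;
  }

  int main(void)
  {
          struct wq *wq = wq_create();
          int task_node = 0; /* task stays on one node, the common case */

          for (int i = 0; i < 1000; i++)
                  wq_enqueue(wq, task_node);

          for (int n = 0; n < MAX_NODES; n++)
                  printf("node %d: %lu items\n", n, wq->wqes[n]->enqueued);
          /* node 0 gets all 1000 items; nodes 1-3 stay at 0, which is
           * the "mostly unused" observation the patch is based on. */
          return 0;
  }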

I'm also curious whether io-wq workers will get migrated
automatically, as they are part of the thread group.

> I'm not a huge fan of the per-node setup, but I think the reasonings
> given in this patch are a bit too vague and we need to go a bit
> deeper on what a better setup would look like.

-- 
Pavel Begunkov
