Message-ID: <ec01745a-b102-4f6e-abc9-abd636d36319@kernel.dk>
Date: Tue, 10 Sep 2024 08:53:02 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Felix Moessbauer <felix.moessbauer@...mens.com>
Cc: asml.silence@...il.com, linux-kernel@...r.kernel.org,
 io-uring@...r.kernel.org, cgroups@...r.kernel.org, dqminh@...udflare.com,
 longman@...hat.com, adriaan.schmidt@...mens.com, florian.bezdeka@...mens.com
Subject: Re: [PATCH 0/2] io_uring/io-wq: respect cgroup cpusets

On 9/10/24 8:33 AM, Felix Moessbauer wrote:
> Hi,
> 
> this series continues the affinity cleanup work started in
> io_uring/sqpoll. It has been tested against the liburing test suite
> (make runtests), where the read-mshot test always fails:
> 
>   Running test read-mshot.t
>   Buffer ring register failed -22
>   test_inc 0 0 failed
>   Test read-mshot.t failed with ret 1
> 
> However, this test also fails on an unpatched linux-next @
> bc83b4d1f086.

That sounds very odd... What liburing are you using? On old kernels
where provided buffer rings aren't available, the test should just skip;
on new ones it should pass. The only thing I can think of is that your
liburing repo isn't current?
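
For reference, the probe-and-skip shape I'd expect here is roughly the
following (untested sketch, not the actual read-mshot test; it assumes a
liburing new enough to have io_uring_setup_buf_ring(), and exit code 77
is what T_EXIT_SKIP maps to in the test harness, if memory serves):

#include <errno.h>
#include <stdio.h>
#include <liburing.h>

int main(void)
{
        struct io_uring ring;
        struct io_uring_buf_ring *br;
        int err;

        if (io_uring_queue_init(8, &ring, 0) < 0)
                return 77;              /* no io_uring at all -> skip */

        /* try to register a small provided buffer ring, group id 1 */
        br = io_uring_setup_buf_ring(&ring, 16, 1, 0, &err);
        if (!br) {
                io_uring_queue_exit(&ring);
                if (err == -EINVAL)
                        return 77;      /* kernel lacks buffer rings -> skip */
                fprintf(stderr, "Buffer ring register failed %d\n", err);
                return 1;
        }

        io_uring_free_buf_ring(&ring, br, 16, 1);
        io_uring_queue_exit(&ring);
        return 0;
}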

> The test wq-aff.t succeeds if at least CPUs 0 and 1 are in the set and
> fails otherwise. This is expected, as the test wants to pin to these
> CPUs. I'll send a patch for liburing to skip that test when this
> precondition is not met.
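
That precondition check could be as simple as the following (untested
sketch, not the actual liburing patch; it just checks the task's
affinity mask and returns the skip exit code):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;

        if (sched_getaffinity(0, sizeof(set), &set) < 0)
                return 1;

        /* the test pins workers to CPUs 0 and 1, so both must be usable */
        if (!CPU_ISSET(0, &set) || !CPU_ISSET(1, &set)) {
                fprintf(stderr, "skipping, CPUs 0 and 1 not available\n");
                return 77;      /* T_EXIT_SKIP */
        }

        /* ... the actual wq-aff test would run here ... */
        return 0;
}
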
> 
> Regarding backporting: I would like to backport these patches to 6.1 as
> well, as they affect our realtime applications. However, between 6.1
> and next there is a major change, da64d6db3bd3 ("io_uring: One wqe per
> wq"), which makes the backport tricky. While I don't think we want to
> backport this change, would a dedicated backport of the two pinning
> patches for the old multi-queue implementation have a chance of being
> accepted?

Let's not backport that patch, simply because it's pretty invasive. It's
fine to have a separate backport of these two patches for -stable; in that
case we'll have one version for stable kernels new enough to have that
change, and one for older versions. Thankfully there aren't that many to
care about.
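
For the older kernels, the separate backport would essentially have to
apply the cpuset restriction per io_wqe instead of to the single
wq->cpu_mask. Very rough shape, untested and from memory of the 6.1
io-wq layout (the function name here is made up, and the real patch may
well look different):

static int io_wq_adjust_affinity(struct io_wq *wq, cpumask_var_t mask)
{
        cpumask_var_t allowed;
        int node, ret = 0;

        if (!alloc_cpumask_var(&allowed, GFP_KERNEL))
                return -ENOMEM;

        /* limit everything to the task's cgroup cpuset */
        cpuset_cpus_allowed(current, allowed);

        rcu_read_lock();
        for_each_node(node) {
                struct io_wqe *wqe = wq->wqes[node];

                if (mask)
                        cpumask_and(wqe->cpu_mask, mask, allowed);
                else
                        cpumask_and(wqe->cpu_mask, cpumask_of_node(node), allowed);
                if (cpumask_empty(wqe->cpu_mask))
                        ret = -EINVAL;
        }
        rcu_read_unlock();

        free_cpumask_var(allowed);
        return ret;
}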

-- 
Jens Axboe
