Message-ID: <0888d6a3-8dea-455b-893f-d8d929e827e2@efficios.com>
Date: Thu, 13 Feb 2025 12:31:02 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Gabriele Monaco <gmonaco@...hat.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, aubrey.li@...ux.intel.com, yu.c.chen@...el.com,
 Andrew Morton <akpm@...ux-foundation.org>, Ingo Molnar <mingo@...hat.com>,
 Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>,
 "Paul E. McKenney" <paulmck@...nel.org>, Shuah Khan <shuah@...nel.org>
Subject: Re: [PATCH v6 2/3] sched: Move task_mm_cid_work to mm delayed work

On 2025-02-13 08:25, Gabriele Monaco wrote:
> On Thu, 2025-02-13 at 14:52 +0800, kernel test robot wrote:
>> kernel test robot noticed
>> "WARNING:at_kernel/workqueue.c:#__queue_delayed_work" on:
>>
>> [    2.640924][    T0] ------------[ cut here ]------------
>> [ 2.641646][ T0] WARNING: CPU: 0 PID: 0 at kernel/workqueue.c:2495
>> __queue_delayed_work (kernel/workqueue.c:2495 (discriminator 9))
>> [    2.642874][    T0] Modules linked in:
>> [    2.643381][    T0] CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted
>> 6.14.0-rc2-00002-g287adf9e9c1f #1
>> [    2.644582][    T0] Hardware name: QEMU Standard PC (i440FX +
>> PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
>> [ 2.645943][ T0] RIP: 0010:__queue_delayed_work
>> (kernel/workqueue.c:2495 (discriminator 9))
> 
> There seem to be major problems with this configuration. I'm trying to
> understand what's wrong, but for the time being this patchset is not
> ready for inclusion.

I'm staring at this now, and I'm thinking we could do a simpler change
that would solve your RT issues without having to introduce a dependency
on workqueue.c.

If the culprit is that task_mm_cid_work() runs for too long on large
systems with many CPUs, why not break it up into smaller iterations?

Rather than iterating over every CPU with "for_each_possible_cpu", we
could break this down into an iteration over at most N CPUs per tick, so:

tick #1: iteration on CPUs 0 ..   N - 1
tick #2: iteration on CPUs N .. 2*N - 1
...
circling back to CPU 0 once we reach the number of possible CPUs.

This N value could be configurable, e.g. CONFIG_RSEQ_CID_SCAN_BATCH,
with a sane default. An RT system could decide to make that value lower.
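
Just to make that concrete, the Kconfig entry could look something like
this (name, default and dependency are only a guess, nothing here is
from an actual patch):

config RSEQ_CID_SCAN_BATCH
	int "Number of CPUs scanned per mm_cid compaction tick"
	depends on SCHED_MM_CID
	default 8
	help
	  Maximum number of possible CPUs whose mm_cid state is scanned
	  on a single scheduler tick. Lower values bound the time spent
	  in the tick on large systems, at the cost of slower mm_cid
	  compaction.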

Then all we need to do is remember the last observed CPU number in the
mm struct, so the next tick picks up from there.
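
Roughly something like this (untested sketch, not from any existing
patch: mm_cid_scan_cpu would be a new mm_struct field initialized to -1,
and sched_mm_cid_remote_clear_old() stands in for whatever per-cpu
clearing the scan does today):

static void task_mm_cid_scan_batch(struct mm_struct *mm)
{
	int cpu = mm->mm_cid_scan_cpu;	/* new field, starts at -1 */
	unsigned int i;

	for (i = 0; i < CONFIG_RSEQ_CID_SCAN_BATCH; i++) {
		/* Next possible CPU, circling back to the first one. */
		cpu = cpumask_next(cpu, cpu_possible_mask);
		if (cpu >= nr_cpu_ids)
			cpu = cpumask_first(cpu_possible_mask);
		sched_mm_cid_remote_clear_old(mm, cpu);
	}
	/* Remember where to pick up on the next tick. */
	mm->mm_cid_scan_cpu = cpu;
}

The scheduler tick would then call this on current->mm instead of
scheduling the full for_each_possible_cpu() scan.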

The main downside of this approach compared to scheduling delayed
work in a workqueue is that it depends on the mm being current when
the scheduler tick happens. But perhaps this is something we could fix
in a different way that does not add a dependency on workqueue. I'm not
sure how, though.

Thoughts?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
