Message-ID: <87ldkbdmbu.ffs@tglx>
Date: Wed, 12 Nov 2025 21:31:49 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Prakash Sangappa
 <prakash.sangappa@...cle.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Peter Zijlstra
 <peterz@...radead.org>, "Paul E. McKenney" <paulmck@...nel.org>, Boqun
 Feng <boqun.feng@...il.com>, Jonathan Corbet <corbet@....net>, Madadi
 Vineeth Reddy <vineethr@...ux.ibm.com>, K Prateek Nayak
 <kprateek.nayak@....com>, Steven Rostedt <rostedt@...dmis.org>, Sebastian
 Andrzej Siewior <bigeasy@...utronix.de>, Arnd Bergmann <arnd@...db.de>,
 "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>
Subject: Re: [patch V3 00/12] rseq: Implement time slice extension mechanism

On Tue, Nov 11 2025 at 11:42, Mathieu Desnoyers wrote:
> On 2025-11-10 09:23, Mathieu Desnoyers wrote:
> I've spent some time digging through Thomas' implementation of
> mm_cid management. I've spotted something which may explain
> the watchdog panic. Here is the scenario:
>
> 1) A process is constrained to a subset of the possible CPUs,
>     and has enough threads to swap from per-thread to per-cpu mm_cid
>     mode. It runs happily in that per-cpu mode.
>
> 2) The number of allowed CPUs is increased for the process, thus invoking
>     mm_update_cpus_allowed. This switches the mode back to per-thread,
>     but delays invocation of mm_cid_work_fn to some point in the future,
>     in thread context, through irq_work + schedule_work.
>
>     At that point, because only __mm_update_max_cids was called by
>     mm_update_cpus_allowed, max_cids is updated, but mc->transit
>     is still zero.
>
>     Also, until mm_cid_fixup_cpus_to_tasks is invoked by the scheduled
>     work, near the end of sched_mm_cid_fork, or by sched_mm_cid_exit,
>     we are in a state where mm_cids are still owned by CPUs, but we
>     are now in per-thread mm_cid mode, which means that the
>     mc->max_cids value depends on the number of threads.

No. It stays in per-CPU mode. The mode switch itself happens either in
the worker or on fork/exit, whichever comes first.

> 3) At that point, a new thread is cloned, thus invoking
>     sched_mm_cid_fork. Calling sched_mm_cid_add_user increases the user
>     count and invokes mm_update_max_cids, which updates the mc->max_cids
>     limit, but does not set the mc->transit flag because this call does not
>     swap from per-cpu to per-task mode (the mode is already per-task).

No. mm::mm_cid::percpu is still set. So mm::mm_cid::transit is irrelevant.

>     Immediately after the call to sched_mm_cid_add_user, sched_mm_cid_fork()
>     attempts to call mm_get_cid while the mm_cid mutex and mm_cid lock
>     are held, and loops forever because all max_cids IDs in the mm_cid
>     mask are still reserved by the stale per-cpu CIDs.

Definitely not. sched_mm_cid_add_user() invokes mm_update_max_cids()
which does the mode switch in mm_cid, sets transit and returns true,
which means that fork() goes and does the transition game and allocates
the CID for the new task after that completed.

There was an issue in V3 with an uninitialized transit member and an
off-by-one in one of the transition functions. It's fixed in the git
tree, but I haven't posted it yet because I was AFK for a week.

I did not notice the V3 issue because the tests passed on a small
machine. The failure only showed up after I rebased onto the tip rseq
and uaccess bits and tested on a larger box.

Thanks,

        tglx




