Message-ID: <20260128084616.GD3372621@noisy.programming.kicks-ass.net>
Date: Wed, 28 Jan 2026 09:46:16 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Ihor Solodrai <ihor.solodrai@...ux.dev>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Gabriele Monaco <gmonaco@...hat.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Michael Jeanson <mjeanson@...icios.com>,
Jens Axboe <axboe@...nel.dk>,
"Paul E. McKenney" <paulmck@...nel.org>,
"Gautham R. Shenoy" <gautham.shenoy@....com>,
Florian Weimer <fweimer@...hat.com>,
Tim Chen <tim.c.chen@...el.com>, Yury Norov <yury.norov@...il.com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>, bpf <bpf@...r.kernel.org>,
sched-ext@...ts.linux.dev, Kernel Team <kernel-team@...a.com>,
Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Puranjay Mohan <puranjay@...nel.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [patch V5 00/20] sched: Rewrite MM CID management
On Tue, Jan 27, 2026 at 04:01:11PM -0800, Ihor Solodrai wrote:
> On 11/19/25 9:26 AM, Thomas Gleixner wrote:
> > This is a follow up on the V4 series which can be found here:
> >
> > https://lore.kernel.org/20251104075053.700034556@linutronix.de
> >
> > The V1 cover letter contains a detailed analysis of the issues:
> >
> > https://lore.kernel.org/20251015164952.694882104@linutronix.de
> >
> > TLDR: The CID management is way too complex and adds significant overhead
> > to scheduler hotpaths.
> >
> > The series rewrites MM CID management in a simpler way which focuses on
> > low overhead in the scheduler while maintaining per-task CIDs as long as
> > the number of threads does not exceed the number of possible CPUs.
>
> Hello Thomas, everyone.
>
> BPF CI caught a deadlock on current bpf-next tip (35538dba51b4).
> Job: https://github.com/kernel-patches/bpf/actions/runs/21417415035/job/61670254640
>
> It appears to be related to this series. Pasting a splat below.
>
> Any ideas what might be going on?
That splat only shows CPU2; that is typically not very useful in a lockup
scenario.
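
For context (an editorial aside, not something suggested in the reply):
one common way to capture backtraces from all active CPUs when a lockup
is suspected is the magic SysRq 'l' trigger. A tiny sketch, assuming
CONFIG_MAGIC_SYSRQ is enabled and the kernel.sysrq sysctl permits it:

/*
 * Illustrative only: ask the kernel to dump backtraces of all active
 * CPUs via the magic SysRq interface. Requires root.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sysrq-trigger", "w");

	if (!f) {
		perror("/proc/sysrq-trigger");
		return 1;
	}
	fputc('l', f);	/* 'l': show backtrace of all active CPUs */
	fclose(f);
	return 0;
}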