Message-ID: <877bwgw9yf.ffs@tglx>
Date: Mon, 27 Oct 2025 18:06:32 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Pingfan Liu <piliu@...hat.com>, kexec@...ts.infradead.org,
linux-kernel@...r.kernel.org
Cc: Pingfan Liu <piliu@...hat.com>, Waiman Long <longman@...hat.com>, Peter
Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Pierre Gondois <pierre.gondois@....com>, Andrew Morton
<akpm@...ux-foundation.org>, Baoquan He <bhe@...hat.com>, Ingo Molnar
<mingo@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>, Dietmar
Eggemann <dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>,
Valentin Schneider <vschneid@...hat.com>, "Rafael J. Wysocki"
<rafael.j.wysocki@...el.com>, Joel Granados <joel.granados@...nel.org>
Subject: Re: [RFC 2/3] kernel/cpu: Mark nonboot cpus as inactive when
shutting down nonboot cpus
On Wed, Oct 22 2025 at 20:13, Pingfan Liu wrote:

> The previous patch lifted the deadline bandwidth check during the kexec

Once this is applied, 'The previous patch' is meaningless.

> process, which raises a potential issue: as the number of online CPUs
> decreases, DL tasks may be crowded onto a few CPUs, which may starve the
> CPU hotplug kthread. As a result, the hot-removal cannot proceed in
> practice. On the other hand, as CPUs are offlined one by one, all tasks
> will eventually be migrated to the kexec CPU.
>
> Therefore, this patch marks all other CPUs as inactive to signal the

git grep "This patch" Documentation/process/

> scheduler to migrate tasks to the kexec CPU during hot-removal.

I'm not seeing what this solves. It just changes the timing of moving
tasks off to the boot CPU, where they compete for CPU time for nothing.

When kexec() is in progress, running user space tasks at all is a
completely pointless exercise.

So the obvious solution to the problem is to freeze all user space tasks
when kexec() is invoked. No horrible hacks in the deadline scheduler and
elsewhere are required to make that work. No?
Thanks,
tglx