Message-ID: <20260123100625.GK171111@noisy.programming.kicks-ass.net>
Date: Fri, 23 Jan 2026 11:06:25 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Chen Jinghuang <chenjinghuang2@...wei.com>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
	mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [RESEND] sched/rt: Skip currently executing CPU in rto_next_cpu()

On Thu, Jan 22, 2026 at 01:25:33AM +0000, Chen Jinghuang wrote:
> CPU0 becomes overloaded when hosting a CPU-bound RT task, a non-CPU-bound
> RT task, and a CFS task stuck in kernel space. When other CPUs switch from
> RT to non-RT tasks, RT load balancing (LB) is triggered; with
> HAVE_RT_PUSH_IPI enabled, they send IPIs to CPU0 to drive the execution
> of rto_push_irq_work_func(). During push_rt_task() on CPU0,
> if next_task->prio < rq->donor->prio, resched_curr() sets NEED_RESCHED;
> after the push operation completes, CPU0 calls rto_next_cpu().
> Since only CPU0 is overloaded in this scenario, rto_next_cpu() should
> ideally return -1 (no further IPI needed).
> 
> However, each CPU that invokes tell_cpu_to_push() during LB increments
> rd->rto_loop_next. Even when rd->rto_cpu is set to -1, the mismatch between
> rd->rto_loop and rd->rto_loop_next forces rto_next_cpu() to restart its
> search from -1. With CPU0 remaining overloaded (satisfying rt_nr_migratory
> && rt_nr_total > 1), it gets reselected, causing CPU0 to queue irq_work to
> itself and send self-IPIs repeatedly. As long as CPU0 stays overloaded and
> other CPUs run pull_rt_task(), it falls into an infinite self-IPI loop,
> which triggers a CPU hardlockup due to continuous self-interrupts.
> 
> The triggering scenario is as follows:
> 
>          cpu0                      cpu1                    cpu2
>                                 pull_rt_task
>                               tell_cpu_to_push
>                  <------------irq_work_queue_on
> rto_push_irq_work_func
>        push_rt_task
>     resched_curr(rq)                                   pull_rt_task
>     rto_next_cpu                                     tell_cpu_to_push
>                       <-------------------------- atomic_inc(rto_loop_next)
> rd->rto_loop != next
>      rto_next_cpu
>    irq_work_queue_on
> rto_push_irq_work_func
> 
> Fix the redundant self-IPIs by filtering out the initiating CPU in
> rto_next_cpu(). This change has been verified to eliminate the spurious
> self-IPIs and prevent the resulting CPU hardlockup.
> 
> Fixes: 4bdced5c9a29 ("sched/rt: Simplify the IPI based RT balancing logic")
> Suggested-by: Steven Rostedt (Google) <rostedt@...dmis.org>
> Suggested-by: K Prateek Nayak <kprateek.nayak@....com>
> Signed-off-by: Chen Jinghuang <chenjinghuang2@...wei.com>
> Reviewed-by: Steven Rostedt (Google) <rostedt@...dmis.org>
> Reviewed-by: Valentin Schneider <vschneid@...hat.com>
> ---
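
For reference, a minimal sketch of the idea described above: skip the CPU
that is currently executing the push work when rto_next_cpu() scans the
rto_mask. The actual diff is trimmed above, so the exact shape and placement
of the check below are assumptions layered on the existing rto_next_cpu()
loop in kernel/sched/rt.c:

static int rto_next_cpu(struct root_domain *rd)
{
	int next;
	int cpu;

	for (;;) {
		/* When rto_cpu is -1 this acts like cpumask_first() */
		cpu = cpumask_next(rd->rto_cpu, rd->rto_mask);

		rd->rto_cpu = cpu;

		if (cpu < nr_cpu_ids) {
			/*
			 * Sketch of the proposed filter: the CPU running
			 * this push work is already pushing its own tasks,
			 * so queueing irq_work back to it only produces a
			 * self-IPI. Skip it and keep scanning.
			 */
			if (cpu == smp_processor_id())
				continue;
			return cpu;
		}

		rd->rto_cpu = -1;

		/*
		 * ACQUIRE ensures we see the @rto_mask changes made
		 * prior to the @next value observed.
		 */
		next = atomic_read_acquire(&rd->rto_loop_next);

		if (rd->rto_loop == next)
			break;

		rd->rto_loop = next;
	}

	return -1;
}

With such a check in place, the rto_loop/rto_loop_next mismatch in the
scenario above still restarts the scan, but CPU0 can no longer select
itself; once no other CPU is overloaded, rto_next_cpu() settles on -1 and
no further irq_work/IPI is queued.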

Thanks!
