Message-ID: <20251024145813.3535989-1-japo@linux.ibm.com>
Date: Fri, 24 Oct 2025 16:58:13 +0200
From: Jan Polensky <japo@...ux.ibm.com>
To: peterz@...radead.org
Cc: arighi@...dia.com, bsegall@...gle.com, cgroups@...r.kernel.org,
        changwoo@...lia.com, dietmar.eggemann@....com, hannes@...xchg.org,
        juri.lelli@...hat.com, linux-kernel@...r.kernel.org,
        liuwenfang@...or.com, longman@...hat.com, mgorman@...e.de,
        mingo@...nel.org, mkoutny@...e.com, rostedt@...dmis.org,
        sched-ext@...ts.linux.dev, tglx@...utronix.de, tj@...nel.org,
        vincent.guittot@...aro.org, void@...ifault.com, vschneid@...hat.com
Subject: [REGRESSION] Deadlock during CPU hotplug caused by abfc01077df6

We've identified a regression introduced by commit abfc01077df6 ("sched: Fix
do_set_cpus_allowed() locking") that causes a reproducible deadlock during CPU
hotplug testing on s390x.

While running the cpuhotplug02.sh test from LTP, which dynamically
offlines and onlines CPUs, the system consistently enters a stalled
state.
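For reference, the core of what cpuhotplug02.sh exercises is repeated writes to the sysfs CPU hotplug interface. A dry-run sketch (it only prints the sysfs writes rather than executing them, since the real commands need root and a hotplug-capable kernel; the CPU numbers are illustrative):

```shell
# Dry-run of the offline/online cycle the LTP test performs.
for cpu in 1 2 3; do
    echo "echo 0 > /sys/devices/system/cpu/cpu${cpu}/online"  # offline CPU
    echo "echo 1 > /sys/devices/system/cpu/cpu${cpu}/online"  # online CPU
done
```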

Observed behavior:
- migration/N attempts to migrate a task that is currently executing on
  another CPU.
- Concurrently, rcu_sched tries to complete an RCU grace period; its
  stack shows resched_cpu() taking a run-queue lock via
  raw_spin_rq_lock_nested().
- Both threads spin in arch_spin_lock_wait(), apparently contending on
  run-queue locks.
- Neither thread makes progress, so the grace period stalls.
- The kernel detects the stall and triggers a crash dump.

System info:
	RELEASE: 6.18.0-20251021.rc2.git224.fe45352cd106.63.fc42.s390x+next
	CPUS: 32
	TASKS: 623
	MEMORY: 16 GB

Crash log excerpt:
    [ 6146.992159] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
    [ 6146.992173] rcu:     1-...0: (5 ticks this GP) idle=cea4/1/0x4000000000000000 softirq=1055899/1055901 fqs=4769
    [ 6146.992236] rcu:     (detected by 3, t=240013 jiffies, g=2041729, q=14778 ncpus=32)
    [ 6146.992240] Task dump for CPU 1:
    [ 6146.992241] task:migration/1     state:R  running task     stack:0     pid:22    tgid:22    ppid:2      task_flags:0x4208040 flags:0x00000000
    [ 6146.992246] Stopper: __balance_push_cpu_stop+0x0/0x230 <- balance_push+0xea/0x170
    [ 6146.992254] Call Trace:
    [ 6146.992255]  [<000000009d9e2300>] 0x9d9e2300
    [ 6146.992280] rcu: rcu_sched kthread starved for 210010 jiffies! g2041729 f0x2 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=23
    [ 6146.992287] rcu:     Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
    [ 6146.992288] rcu: RCU grace-period kthread stack dump:
    [ 6146.992289] task:rcu_sched       state:R  running task     stack:0     pid:16    tgid:16    ppid:2      task_flags:0x208040 flags:0x00000010
    [ 6146.992294] Call Trace:
    [ 6146.992295]  [<0700000000000001>] 0x700000000000001
    [ 6146.992298]  [<000002e1fb072998>] arch_spin_lock_wait+0xc8/0x110
    [ 6146.992303]  [<000002e1fa239d06>] raw_spin_rq_lock_nested+0x96/0xc0
    [ 6146.992306]  [<000002e1fa23bc90>] resched_cpu+0x50/0xc0
    [ 6146.992309]  [<000002e1fa29d646>] force_qs_rnp+0x306/0x3e0
    [ 6146.992314]  [<000002e1fa29ed30>] rcu_gp_fqs_loop+0x430/0x6e0
    [ 6146.992316]  [<000002e1fa2a1b0e>] rcu_gp_kthread+0x1ee/0x270
    [ 6146.992320]  [<000002e1fa228edc>] kthread+0x12c/0x250
    [ 6146.992323]  [<000002e1fa19ccfc>] __ret_from_fork+0x3c/0x150
    [ 6146.992328]  [<000002e1fb0800ba>] ret_from_fork+0xa/0x30

