Message-ID: <20250211113827.302fd066@gandalf.local.home>
Date: Tue, 11 Feb 2025 11:38:27 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Andrea Righi <arighi@...dia.com>
Cc: Yury Norov <yury.norov@...il.com>, Tejun Heo <tj@...nel.org>, David
 Vernet <void@...ifault.com>, Changwoo Min <changwoo@...lia.com>, Ingo
 Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, Juri
 Lelli <juri.lelli@...hat.com>, Vincent Guittot
 <vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>,
 Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin
 Schneider <vschneid@...hat.com>, Ian May <ianm@...dia.com>,
 bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/6] sched_ext: idle: Per-node idle cpumasks

On Tue, 11 Feb 2025 15:45:15 +0100
Andrea Righi <arighi@...dia.com> wrote:

> ...which is basically this (with GFP_ATOMIC):
> 
> [   11.829079] =============================
> [   11.829109] [ BUG: Invalid wait context ]
> [   11.829146] 6.13.0-virtme #51 Not tainted
> [   11.829185] -----------------------------
> [   11.829243] fish/344 is trying to lock:
> [   11.829285] ffff9659bec450b0 (&c->lock){..-.}-{3:3}, at: ___slab_alloc+0x66/0x1510
> [   11.829380] other info that might help us debug this:
> [   11.829450] context-{5:5}
> [   11.829494] 8 locks held by fish/344:
> [   11.829534]  #0: ffff965a409c70a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x28/0x60
> [   11.829643]  #1: ffff965a409c7130 (&tty->atomic_write_lock){+.+.}-{4:4}, at: file_tty_write.isra.0+0xa1/0x330
> [   11.829765]  #2: ffff965a409c72e8 (&tty->termios_rwsem/1){++++}-{4:4}, at: n_tty_write+0x9e/0x510
> [   11.829871]  #3: ffffbc6d01433380 (&ldata->output_lock){+.+.}-{4:4}, at: n_tty_write+0x1f1/0x510
> [   11.829979]  #4: ffffffffb556b5c0 (rcu_read_lock){....}-{1:3}, at: __queue_work+0x59/0x680
> [   11.830173]  #5: ffff9659800f0018 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0xd7/0x680
> [   11.830286]  #6: ffff9659801bcf60 (&p->pi_lock){-.-.}-{2:2}, at: try_to_wake_up+0x56/0x920
> [   11.830396]  #7: ffffffffb556b5c0 (rcu_read_lock){....}-{1:3}, at: scx_select_cpu_dfl+0x56/0x460
> 
> And I think that's because:
> 
>  * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
>  * watermark is applied to allow access to "atomic reserves".
>  * The current implementation doesn't support NMI and few other strict
>  * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT.
> 
> So I guess the only viable option is to preallocate a nodemask_t and
> protect it somehow, hoping that it doesn't add too much overhead...
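
Preallocation could be as simple as a single static scratch mask guarded
by a raw lock, something like this (completely untested sketch, names made
up, not from the actual patch):

	/* hypothetical: preallocated scratch nodemask; a raw lock so it
	 * stays usable under p->pi_lock and on PREEMPT_RT */
	static nodemask_t scx_scratch_nodemask;
	static DEFINE_RAW_SPINLOCK(scx_scratch_lock);

	unsigned long flags;

	raw_spin_lock_irqsave(&scx_scratch_lock, flags);
	nodes_clear(scx_scratch_nodemask);
	/* ... build and consume the mask, no allocation needed ... */
	raw_spin_unlock_irqrestore(&scx_scratch_lock, flags);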

I believe it's because you hold p->pi_lock, which is a raw_spin_lock(), and
you are trying to take a lock in ___slab_alloc() which I bet is a normal
spin_lock(). In PREEMPT_RT that turns into a mutex, and you cannot take a
spin_lock while holding a raw_spin_lock.
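
That is, the splat boils down to this pattern (not the exact call chain,
just the shape of it):

	cpumask_var_t mask;
	unsigned long flags;

	raw_spin_lock_irqsave(&p->pi_lock, flags);
	/*
	 * GFP_ATOMIC does not sleep, but it still enters the slab
	 * allocator, which takes a normal spin_lock in ___slab_alloc().
	 * On PREEMPT_RT that spin_lock is a sleeping lock, so lockdep
	 * flags taking it under a raw_spin_lock as an invalid wait
	 * context.
	 */
	if (alloc_cpumask_var(&mask, GFP_ATOMIC))
		free_cpumask_var(mask);
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);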

-- Steve
