Message-ID: <20251217134931.60601faf.gary@garyguo.net>
Date: Wed, 17 Dec 2025 13:49:31 +0000
From: Gary Guo <gary@...yguo.net>
To: Alice Ryhl <aliceryhl@...gle.com>
Cc: Matthew Wilcox <willy@...radead.org>, "Liam R. Howlett"
<Liam.Howlett@...cle.com>, Andrew Ballance <andrewjballance@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, Miguel Ojeda <ojeda@...nel.org>,
Boqun Feng <boqun.feng@...il.com>, "Björn Roy Baron"
<bjorn3_gh@...tonmail.com>, Benno Lossin <lossin@...nel.org>, Andreas
Hindborg <a.hindborg@...nel.org>, Trevor Gross <tmgross@...ch.edu>, Danilo
Krummrich <dakr@...nel.org>, maple-tree@...ts.infradead.org,
linux-mm@...ck.org, rust-for-linux@...r.kernel.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] rust: maple_tree: rcu_read_lock() in destructor to
silence lockdep
On Wed, 17 Dec 2025 13:10:37 +0000
Alice Ryhl <aliceryhl@...gle.com> wrote:
> When running the Rust maple tree kunit tests with lockdep, you may
> trigger a warning that looks like this:
>
> lib/maple_tree.c:780 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
> rcu_scheduler_active = 2, debug_locks = 1
> no locks held by kunit_try_catch/344.
>
> stack backtrace:
> CPU: 3 UID: 0 PID: 344 Comm: kunit_try_catch Tainted: G N 6.19.0-rc1+ #2 NONE
> Tainted: [N]=TEST
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.17.0-0-gb52ca86e094d-prebuilt.qemu.org 04/01/2014
> Call Trace:
> <TASK>
> dump_stack_lvl+0x71/0x90
> lockdep_rcu_suspicious+0x150/0x190
> mas_start+0x104/0x150
> mas_find+0x179/0x240
> _RINvNtCs5QSdWC790r4_4core3ptr13drop_in_placeINtNtCs1cdwasc6FUb_6kernel10maple_tree9MapleTreeINtNtNtBL_5alloc4kbox3BoxlNtNtB1x_9allocator7KmallocEEECsgxAQYCfdR72_25doctests_kernel_generated+0xaf/0x130
> rust_doctest_kernel_maple_tree_rs_0+0x600/0x6b0
> ? lock_release+0xeb/0x2a0
> ? kunit_try_catch_run+0x210/0x210
> kunit_try_run_case+0x74/0x160
> ? kunit_try_catch_run+0x210/0x210
> kunit_generic_run_threadfn_adapter+0x12/0x30
> kthread+0x21c/0x230
> ? __do_trace_sched_kthread_stop_ret+0x40/0x40
> ret_from_fork+0x16c/0x270
> ? __do_trace_sched_kthread_stop_ret+0x40/0x40
> ret_from_fork_asm+0x11/0x20
> </TASK>
>
> This is because the destructor of the maple tree calls mas_find()
> without taking rcu_read_lock() or the spinlock. Doing that is actually
> OK in this case, since the destructor has exclusive access to the
> entire maple tree, but it still triggers a lockdep warning. To fix
> that, take the RCU read lock around the lookup.
>
> In the future, it's possible that memory reclaim could gain a feature
> where it reallocates entries in maple trees even if no user code is
> touching them. If that feature is added, then this use of the RCU read
> lock would become load-bearing, so I did not make it conditional on
> lockdep.
>
> We have to repeatedly take and release the RCU read lock because the
> destructor of T might perform operations that sleep, and sleeping is
> not allowed inside an RCU read-side critical section.
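
To make the scoping concrete, the loop ends up shaped roughly like the
sketch below. This is illustrative only: it assumes the tree stores
KBox<T> entries (as in the doctest that produced the trace above), and
the reconstitute-and-drop step stands in for whatever free path
free_all_entries() actually uses for a given entry type.

    loop {
        // Hold the RCU read lock only across the lookup itself. The
        // Guard's Drop impl calls rcu_read_unlock() at the end of the
        // inner block.
        let ptr = {
            let _rcu = kernel::sync::rcu::Guard::new();
            ma_state.mas_find_raw(usize::MAX)
        };
        if ptr.is_null() {
            break;
        }
        // Reconstitute and drop the entry outside the critical
        // section, since T's destructor may sleep.
        // SAFETY: the destructor has exclusive access to the tree, and
        // every non-null entry was created by KBox::into_raw().
        drop(unsafe { KBox::from_raw(ptr.cast::<T>()) });
    }
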
>
> Reported-by: Andreas Hindborg <a.hindborg@...nel.org>
> Closes: https://rust-for-linux.zulipchat.com/#narrow/channel/x/topic/x/near/564215108
> Fixes: da939ef4c494 ("rust: maple_tree: add MapleTree")
> Cc: stable@...r.kernel.org
> Signed-off-by: Alice Ryhl <aliceryhl@...gle.com>
Reviewed-by: Gary Guo <gary@...yguo.net>
> ---
> Intended for the same tree as any other maple tree patch. (I believe
> that's Andrew Morton's tree.)
> ---
> rust/kernel/maple_tree.rs | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/rust/kernel/maple_tree.rs b/rust/kernel/maple_tree.rs
> index e72eec56bf5772ada09239f47748cd649212d8b0..265d6396a78a17886c8b5a3ebe7ba39ccc354add 100644
> --- a/rust/kernel/maple_tree.rs
> +++ b/rust/kernel/maple_tree.rs
> @@ -265,7 +265,16 @@ unsafe fn free_all_entries(self: Pin<&mut Self>) {
> loop {
> // This uses the raw accessor because we're destroying pointers without removing them
> // from the maple tree, which is only valid because this is the destructor.
> - let ptr = ma_state.mas_find_raw(usize::MAX);
> + //
> + // Take the rcu lock because mas_find_raw() requires that you hold either the spinlock
> + // or the rcu read lock. This is only really required if memory reclaim might
> + // reallocate entries in the tree, as we otherwise have exclusive access. That feature
> + // doesn't exist yet, so for now, taking the rcu lock only serves the purpose of
> + // silencing lockdep.
> + let ptr = {
> + let _rcu = kernel::sync::rcu::Guard::new();
> + ma_state.mas_find_raw(usize::MAX)
> + };
> if ptr.is_null() {
> break;
> }
>
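
For readers unfamiliar with the Rust wrapper: kernel::sync::rcu::Guard
is an RAII guard, which is what makes the inner block above delimit the
read-side critical section. A minimal sketch of the semantics, assuming
the existing kernel::sync::rcu API (read_lock() is the convenience
alias for Guard::new()):

    use kernel::sync::rcu;

    fn rcu_scope_sketch() {
        // Entering: read_lock() calls rcu_read_lock().
        let guard = rcu::read_lock();
        // ... dereference RCU-protected data here; no sleeping ...
        // Leaving: Guard's Drop impl calls rcu_read_unlock().
        drop(guard);
        // From this point on it is safe to sleep again.
    }
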
> ---
> base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
> change-id: 20251217-maple-drop-rcu-dfe72fb5f49e
>
> Best regards,