[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <f7abdb4c-b7e2-4fe4-9198-f313d0cacacb@lucifer.local>
Date: Thu, 13 Nov 2025 11:05:28 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Matthew Wilcox <willy@...radead.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Shakeel Butt <shakeel.butt@...ux.dev>, Jann Horn <jannh@...gle.com>,
stable@...r.kernel.org,
syzbot+131f9eb2b5807573275c@...kaller.appspotmail.com
Subject: Re: [PATCH] mm/mmap_lock: Reset maple state on lock_vma_under_rcu()
retry
On Wed, Nov 12, 2025 at 05:27:22PM -0800, Paul E. McKenney wrote:
> On Thu, Nov 13, 2025 at 12:04:19AM +0000, Matthew Wilcox wrote:
> > On Wed, Nov 12, 2025 at 03:06:38PM +0000, Lorenzo Stoakes wrote:
> > > > Any time the rcu read lock is dropped, the maple state must be
> > > > invalidated. Resetting the address and state to MA_START is the safest
> > > > course of action, which will result in the next operation starting from
> > > > the top of the tree.
> > >
> > > Since we all missed it I do wonder if we need some super clear comment
> > > saying 'hey if you drop + re-acquire RCU lock you MUST revalidate mas state
> > > by doing 'blah'.
> >
> > I mean, this really isn't an RCU thing. This is also bad:
> >
> > spin_lock(a);
> > p = *q;
> > spin_unlock(a);
> > spin_lock(a);
> > b = *p;
> >
> > p could have been freed while you didn't hold lock a. Detecting this
> > kind of thing needs compiler assistence (ie Rust) to let you know that
> > you don't have the right to do that any more.
>
> While in no way denigrating Rust's compile-time detection of this sort
> of thing, use of KASAN combined with CONFIG_RCU_STRICT_GRACE_PERIOD=y
> (which restricts you to four CPUs) can sometimes help.
>
> > > I think one source of confusion for me with maple tree operations is - what
> > > to do if we are in a position where some kind of reset is needed?
> > >
> > > So even if I'd realised 'aha we need to reset this' it wouldn't be obvious
> > > to me that we ought to set to the address.
> >
> > I think that's a separate problem.
> >
> > > > +++ b/mm/mmap_lock.c
> > > > @@ -257,6 +257,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > > > if (PTR_ERR(vma) == -EAGAIN) {
> > > > count_vm_vma_lock_event(VMA_LOCK_MISS);
> > > > /* The area was replaced with another one */
> > > > + mas_set(&mas, address);
> > >
> > > I wonder if we could detect that the RCU lock was released (+ reacquired) in
> > > mas_walk() in a debug mode, like CONFIG_VM_DEBUG_MAPLE_TREE?
> >
> > Dropping and reacquiring the RCU read lock should have been a big red
> > flag. I didn't have time to review the patches, but if I had, I would
> > have suggested passing the mas down to the routine that drops the rcu
> > read lock so it can be invalidated before dropping the readlock.
>
> There has been some academic efforts to check for RCU-protected pointers
> leaking from one RCU read-side critical section to another, but nothing
> useful has come from this. :-/
Ugh a pity. I was hoping we could do (in debug mode only obv) something
absolutely roughly like:
On init:
mas->rcu_critical_section = rcu_get_critical_section_blah();
...
On walk:
VM_WARN_ON(rcu_critical_section_blah() != mas->rcu_critical_section);
But sounds like that isn't feasible.
I always like the idea of us having debug stuff that helps highlight dumb
mistakes very quickly, no matter how silly they might be :)
>
> But rcu_pointer_handoff() and unrcu_pointer() are intended not only for
> documentation, but also to suppress the inevitable false positives should
> anyone figure out how to detect leaking of RCU-protected pointers.
>
> Thanx, Paul
Cheers, Lorenzo
Powered by blists - more mailing lists