Message-ID: <20250115153507.GF8362@noisy.programming.kicks-ass.net>
Date: Wed, 15 Jan 2025 16:35:07 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Mateusz Guzik <mjguzik@...il.com>, akpm@...ux-foundation.org,
willy@...radead.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, david.laight.linux@...il.com,
mhocko@...e.com, vbabka@...e.cz, hannes@...xchg.org,
oliver.sang@...el.com, mgorman@...hsingularity.net,
david@...hat.com, peterx@...hat.com, oleg@...hat.com,
dave@...olabs.net, paulmck@...nel.org, brauner@...nel.org,
dhowells@...hat.com, hdanton@...a.com, hughd@...gle.com,
lokeshgidra@...gle.com, minchan@...gle.com, jannh@...gle.com,
shakeel.butt@...ux.dev, souravpanda@...gle.com,
pasha.tatashin@...een.com, klarasmodin@...il.com,
richard.weiyang@...il.com, corbet@....net,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v9 11/17] mm: replace vm_lock and detached flag with a
reference count
On Wed, Jan 15, 2025 at 07:00:37AM -0800, Suren Baghdasaryan wrote:
> On Wed, Jan 15, 2025 at 3:13 AM Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Wed, Jan 15, 2025 at 11:48:41AM +0100, Peter Zijlstra wrote:
> > > On Sat, Jan 11, 2025 at 12:14:47PM -0800, Suren Baghdasaryan wrote:
> > >
> > > > > Replacing down_read_trylock() with the new routine loses an acquire
> > > > > fence. That alone is not a problem, but see below.
> > > >
> > > > Hmm. I think this acquire fence is actually necessary. We don't want
> > > > the later vm_lock_seq check to be reordered and happen before we take
> > > > the refcount. Otherwise this might happen:
> > > >
> > > >         reader                          writer
> > > >
> > > > if (vm_lock_seq == mm_lock_seq) // check got reordered
> > > >         return false;
> > > >
> > > >                                         vm_refcnt += VMA_LOCK_OFFSET
> > > >                                         vm_lock_seq = mm_lock_seq
> > > >                                         vm_refcnt -= VMA_LOCK_OFFSET
> > > >
> > > > if (!__refcount_inc_not_zero_limited())
> > > >         return false;
> > > >
> > > > Both reader's checks will pass and the reader would read-lock a vma
> > > > that was write-locked.
> > >
> > > Hmm, you're right. That acquire does matter here.
> >
> > Notably, it means refcount_t is entirely unsuitable for anything
> > SLAB_TYPESAFE_BY_RCU, since they all will need secondary validation
> > conditions after the refcount succeeds.
>
> Thanks for reviewing, Peter!
> Yes, I'm changing the code to use atomic_t instead of refcount_t, and
> it comes out quite nicely, I think. I had to add two small helper
> functions:
> vm_refcount_inc() - similar to refcount_add_not_zero() but with an
> acquire fence.
> vm_refcnt_sub() - similar to refcount_sub_and_test(). I could use
> atomic_sub_and_test(), but that would add an unnecessary acquire fence
> in the pagefault path, so I'm using the refcount_sub_and_test() logic
> instead.
Right.
> For SLAB_TYPESAFE_BY_RCU I think we are ok with the
> __vma_enter_locked()/__vma_exit_locked() transition in the
> vma_mark_detached() before freeing the vma and would not need
> secondary validation. In __vma_enter_locked(), vm_refcount gets
> VMA_LOCK_OFFSET set, which prevents readers from taking the refcount.
> In __vma_exit_locked() vm_refcnt transitions to 0, so again that
> prevents readers from taking the refcount. IOW, the readers won't get
> to the secondary validation and will fail early on
> __refcount_inc_not_zero_limited(). I think this transition correctly
> serves the purpose of waiting for current temporary readers to exit
> and preventing new readers from read-locking and using the vma.
Consider:
	CPU0					CPU1

	rcu_read_lock();
	vma = vma_lookup(mm, vaddr);

	... CPU goes to sleep for a *long time* ...

						__vma_exit_locked();
						vma_area_free()
						..
						vma = vma_area_alloc();
						vma_mark_attached();

	... comes back once vma is re-used ...

	vma_start_read()
	  vm_refcount_inc(); // success!!
At which point we need to validate vma is for mm and covers vaddr, which
is what patch 15 does, no?
Also, I seem to have forgotten some braces back in 2008 :-)
---
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 10a971c2bde3..c1356b52f8ea 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -115,9 +115,10 @@ enum _slab_flag_bits {
  *  rcu_read_lock();
  *  obj = lockless_lookup(key);
  *  if (obj) {
- *    if (!try_get_ref(obj)) // might fail for free objects
+ *    if (!try_get_ref(obj)) { // might fail for free objects
  *      rcu_read_unlock();
  *      goto begin;
+ *    }
  *
  *    if (obj->key != key) { // not the object we expected
  *      put_ref(obj);