Message-ID: <CAJuCfpEE6NTAds-77Y=LVB6Q6CJy_1Ewq5_DsQ1pmXJGVCakEA@mail.gmail.com>
Date: Wed, 15 Jan 2025 08:22:12 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mateusz Guzik <mjguzik@...il.com>, akpm@...ux-foundation.org, willy@...radead.org,
liam.howlett@...cle.com, lorenzo.stoakes@...cle.com,
david.laight.linux@...il.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, oliver.sang@...el.com, mgorman@...hsingularity.net,
david@...hat.com, peterx@...hat.com, oleg@...hat.com, dave@...olabs.net,
paulmck@...nel.org, brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com,
hughd@...gle.com, lokeshgidra@...gle.com, minchan@...gle.com,
jannh@...gle.com, shakeel.butt@...ux.dev, souravpanda@...gle.com,
pasha.tatashin@...een.com, klarasmodin@...il.com, richard.weiyang@...il.com,
corbet@....net, linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v9 11/17] mm: replace vm_lock and detached flag with a
reference count
On Wed, Jan 15, 2025 at 7:38 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Wed, Jan 15, 2025 at 04:35:07PM +0100, Peter Zijlstra wrote:
>
> > Consider:
> >
> >         CPU0                            CPU1
> >
> >         rcu_read_lock();
> >         vma = vma_lookup(mm, vaddr);
> >
> >         ... cpu goes sleep for a *long time* ...
> >
> >                                         __vma_exit_locked();
> >                                         vma_area_free()
> >                                         ..
> >                                         vma = vma_area_alloc();
> >                                         vma_mark_attached();
> >
> >         ... comes back once vma is re-used ...
> >
> >         vma_start_read()
> >           vm_refcount_inc(); // success!!
> >
> > At which point we need to validate vma is for mm and covers vaddr, which
> > is what patch 15 does, no?
Correct. Sorry, I thought by "secondary validation" you meant only the
vm_lock_seq check in vma_start_read(). Now I understand your point.
Yes, if the vma we found gets reused before we read-lock it, the checks
for mm and address range should catch a possibly incorrect vma. If
those checks fail, we retry; if they succeed, we have the correct vma
even if it was recycled after we found it.
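
To make the retry concrete, the lookup path would look roughly like the
sketch below. This is illustrative pseudocode, not the actual patch
code: lock_vma_under_rcu_sketch() is a made-up name, vma_lookup() here
stands in for the RCU-safe maple tree walk, and the real helpers in
patches 11 and 15 differ in detail.

struct vm_area_struct *lock_vma_under_rcu_sketch(struct mm_struct *mm,
                                                 unsigned long vaddr)
{
        struct vm_area_struct *vma;

        rcu_read_lock();
retry:
        vma = vma_lookup(mm, vaddr);
        if (!vma)
                goto none;

        /* Bump vm_refcnt; this fails if the vma is detached (count 0). */
        if (!vma_start_read(vma))
                goto none;

        /*
         * The vma may have been freed and reused for a different mm or
         * range between the lookup and the refcount bump, so re-validate
         * it now that it cannot go away under us.
         */
        if (unlikely(vma->vm_mm != mm ||
                     vaddr < vma->vm_start || vaddr >= vma->vm_end)) {
                vma_end_read(vma);
                goto retry;
        }

        rcu_read_unlock();
        return vma;
none:
        rcu_read_unlock();
        return NULL;
}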
>
> Also, critically, we want these reads to happen *after* the refcount
> increment.
Yes, and I think the acquire fence in the
refcount_add_not_zero_limited() replacement should guarantee that
ordering.
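
FWIW, here is a stripped-down C11 model of why the acquire on the
increment matters. This is a userspace toy, not kernel code; the struct
and helper names are invented for illustration. The point is that the
validation load cannot be reordered before the successful refcount
bump, so we never validate a recycled object against fields read before
we pinned it.

#include <stdatomic.h>
#include <stdbool.h>

struct obj {
        atomic_int refcnt;              /* models vma->vm_refcnt */
        _Atomic(void *) owner;          /* models vma->vm_mm */
};

/* Try to pin the object, then validate which owner it belongs to. */
static bool pin_and_validate(struct obj *o, void *expected_owner)
{
        int old = atomic_load_explicit(&o->refcnt, memory_order_relaxed);

        do {
                if (old == 0)           /* detached: do not resurrect */
                        return false;
        } while (!atomic_compare_exchange_weak_explicit(&o->refcnt, &old,
                                        old + 1,
                                        memory_order_acquire,
                                        memory_order_relaxed));

        /*
         * The acquire on the successful increment orders this load after
         * the increment, mirroring the acquire fence we rely on in the
         * refcount_add_not_zero_limited() replacement.
         */
        if (atomic_load_explicit(&o->owner, memory_order_relaxed) !=
            expected_owner) {
                atomic_fetch_sub_explicit(&o->refcnt, 1,
                                          memory_order_release);
                return false;           /* caller re-does the lookup */
        }
        return true;
}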