Message-ID: <CAJuCfpGuvyRj6zxPuJDzCcGz_-x7feycsCkNQT0+Nzn4oUYcSA@mail.gmail.com>
Date: Sun, 14 Sep 2025 17:43:54 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Chris Mason <clm@...a.com>
Cc: akpm@...ux-foundation.org, Liam.Howlett@...cle.com,
lorenzo.stoakes@...cle.com, david@...hat.com, vbabka@...e.cz,
peterx@...hat.com, jannh@...gle.com, hannes@...xchg.org, mhocko@...nel.org,
paulmck@...nel.org, shuah@...nel.org, adobriyan@...il.com, brauner@...nel.org,
josef@...icpanda.com, yebin10@...wei.com, linux@...ssschuh.net,
willy@...radead.org, osalvador@...e.de, andrii@...nel.org,
ryan.roberts@....com, christophe.leroy@...roup.eu, tjmercier@...gle.com,
kaleshsingh@...gle.com, aha310510@...il.com, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kselftest@...r.kernel.org, SeongJae Park <sj@...nel.org>
Subject: Re: [PATCH v4 3/3] fs/proc/task_mmu: execute PROCMAP_QUERY ioctl
under per-vma locks
On Sun, Sep 14, 2025 at 6:24 AM Chris Mason <clm@...a.com> wrote:
>
> On Fri, 8 Aug 2025 08:28:49 -0700 Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> > Utilize per-vma locks to stabilize the vma after lookup without taking
> > mmap_lock during PROCMAP_QUERY ioctl execution. If the vma lock is
> > contended, we fall back to mmap_lock, but take it only momentarily to
> > lock the vma and then release the mmap_lock. In the very unlikely case
> > of vm_refcnt overflow, this fallback path will fail and the ioctl is
> > done under mmap_lock protection.
> >
> > This change is designed to reduce mmap_lock contention and prevent
> > PROCMAP_QUERY ioctl calls from blocking address space updates.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > Acked-by: SeongJae Park <sj@...nel.org>
> > ---
> > fs/proc/task_mmu.c | 103 +++++++++++++++++++++++++++++++++++++--------
> > 1 file changed, 85 insertions(+), 18 deletions(-)
> >
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index c0968d293b61..e64cf40ce9c4 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -132,6 +132,12 @@ static void release_task_mempolicy(struct proc_maps_private *priv)
>
> [ ... ]
>
> > +static struct vm_area_struct *query_vma_find_by_addr(struct proc_maps_locking_ctx *lock_ctx,
> > + unsigned long addr)
> > +{
> > + struct mm_struct *mm = lock_ctx->mm;
> > + struct vm_area_struct *vma;
> > + struct vma_iterator vmi;
> > +
> > + if (lock_ctx->mmap_locked)
> > + return find_vma(mm, addr);
> > +
> > + /* Unlock previously locked VMA and find the next one under RCU */
> > + unlock_ctx_vma(lock_ctx);
> > + rcu_read_lock();
> > + vma_iter_init(&vmi, mm, addr);
> > + vma = lock_next_vma(mm, &vmi, addr);
> > + rcu_read_unlock();
> > +
> > + if (!vma)
> > + return NULL;
> > +
> > + if (!IS_ERR(vma)) {
> > + lock_ctx->locked_vma = vma;
> > + return vma;
> > + }
> > +
> > + if (PTR_ERR(vma) == -EAGAIN) {
> > + /* Fallback to mmap_lock on vma->vm_refcnt overflow */
> > + mmap_read_lock(mm);
>
> I know it's just a (very rare) fallback, but should we be using
> mmap_read_lock_killable() for consistency? I can see this impacting OOM
> kills or other cases where we really want to be able to get rid of procs.
That's a good idea. From a quick look it seems safe to fail with
-EINTR here, which will propagate all the way to do_procmap_query().
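Something along these lines, I think (a rough, untested sketch; the rest
of the fallback stays as in the patch, and the ERR_PTR(-EINTR) would be
handled by the same error paths the callers already use for the other
error returns from this function):

	if (PTR_ERR(vma) == -EAGAIN) {
		/* Fallback to mmap_lock on vma->vm_refcnt overflow */
		if (mmap_read_lock_killable(mm))
			/* fatal signal pending, bail out instead of blocking */
			return ERR_PTR(-EINTR);
		...
	}
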
Do you want to post a fixup patch?
Thanks,
Suren.
>
> -chris