Message-ID: <6c4162c5-1703-45db-b9ca-96ecd8ce551f@suse.cz>
Date: Mon, 26 Jan 2026 14:42:00 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...nel.org>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>, Mike Rapoport
<rppt@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>, Shakeel Butt <shakeel.butt@...ux.dev>,
Jann Horn <jannh@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
Waiman Long <longman@...hat.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Clark Williams <clrkwllms@...nel.org>, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v4 09/10] mm/vma: update vma_assert_locked() to use
lockdep

On 1/23/26 21:12, Lorenzo Stoakes wrote:
> We can use lockdep to avoid unnecessary work here, otherwise update the
> code to logically evaluate all pertinent cases and share code with
> vma_assert_write_locked().
>
> Make it clear here that we treat the VMA being detached at this point as a
> bug, this was only implicit before.
>
> Reviewed-by: Suren Baghdasaryan <surenb@...gle.com>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>

Nit:
> ---
> include/linux/mmap_lock.h | 41 +++++++++++++++++++++++++++++++++++++--
> 1 file changed, 39 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index 23bde4bd5a85..4a0aafc66c5d 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -322,19 +322,56 @@ int vma_start_write_killable(struct vm_area_struct *vma)
> return __vma_start_write(vma, __vma_raw_mm_seqnum(vma), TASK_KILLABLE);
> }
>
> +/**
> + * vma_assert_write_locked() - assert that @vma holds a VMA write lock.
> + * @vma: The VMA to assert.
> + */
> static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> {
> VM_WARN_ON_ONCE_VMA(!__is_vma_write_locked(vma), vma);
> }
>
> +/**
> + * vma_assert_locked() - assert that @vma holds either a VMA read or a VMA write
> + * lock and is not detached.
> + * @vma: The VMA to assert.
> + */
> static inline void vma_assert_locked(struct vm_area_struct *vma)
> {
> + unsigned int refcnt;
> +
> + /*
> + * If read-locked or currently excluding readers, then the VMA is
> + * locked.
> + */
> +#ifdef CONFIG_LOCKDEP
> + if (lock_is_held(&vma->vmlock_dep_map))
> + return;
Wouldn't this work a tiny bit better?

	if (!lock_is_held(&vma->vmlock_dep_map))
		vma_assert_write_locked(vma);
	return;
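
With the comment adjusted to match, the lockdep part would then look
roughly like this (untested sketch, rest of the function unchanged):

	/*
	 * If lockdep doesn't see a read lock held by the current task, the
	 * VMA can only be legitimately locked by holding the write lock, so
	 * assert that instead and skip the refcount checks below entirely.
	 */
#ifdef CONFIG_LOCKDEP
	if (!lock_is_held(&vma->vmlock_dep_map))
		vma_assert_write_locked(vma);
	return;
#endif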
> +#endif
> +
> /*
> * See the comment describing the vm_area_struct->vm_refcnt field for
> * details of possible refcnt values.
> */
> - VM_WARN_ON_ONCE_VMA(refcount_read(&vma->vm_refcnt) <= 1 &&
> - !__is_vma_write_locked(vma), vma);
> + refcnt = refcount_read(&vma->vm_refcnt);
> +
> + /*
> + * In this case we're either read-locked, write-locked with temporary
> + * readers, or in the midst of excluding readers, all of which means
> + * we're locked.
> + */
> + if (refcnt > 1)
> + return;
> +
> + /* It is a bug for the VMA to be detached here. */
> + VM_WARN_ON_ONCE_VMA(!refcnt, vma);
> +
> + /*
> + * OK, the VMA has a reference count of 1 which means it is either
> + * unlocked and attached or write-locked, so assert that it is
> + * write-locked.
> + */
> + vma_assert_write_locked(vma);
> }
>
> static inline bool vma_is_attached(struct vm_area_struct *vma)