Message-ID: <CAC_TJvc9HNwBbH9953vnqZQ0zLYvRF4fdZvc1Sp7OYV8bPVOCA@mail.gmail.com>
Date: Thu, 18 Sep 2025 08:43:35 -0700
From: Kalesh Singh <kaleshsingh@...gle.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: akpm@...ux-foundation.org, minchan@...nel.org, david@...hat.com,
Liam.Howlett@...cle.com, rppt@...nel.org, pfalcato@...e.de,
kernel-team@...roid.com, android-mm@...gle.com,
Alexander Viro <viro@...iv.linux.org.uk>, Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
Kees Cook <kees@...nel.org>, Vlastimil Babka <vbabka@...e.cz>, Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>, Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>, Jann Horn <jannh@...gle.com>,
Shuah Khan <shuah@...nel.org>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-trace-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v2 5/7] mm: harden vma_count against direct modification

On Thu, Sep 18, 2025 at 7:52 AM Lorenzo Stoakes
<lorenzo.stoakes@...cle.com> wrote:
>
> On Mon, Sep 15, 2025 at 09:36:36AM -0700, Kalesh Singh wrote:
> > To make VMA counting more robust, prevent direct modification of the
> > mm->vma_count field. This is achieved by making the public-facing
> > member const via a union and requiring all modifications to go
> > through a new set of helper functions that operate on a private
> > __vma_count.
> >
> > While there are no other invariants tied to vma_count currently, this
> > structural change improves maintainability, as it creates a single,
> > centralized point for any future logic, such as adding debug checks
> > or updating related statistics (in subsequent patches).
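
(For anyone skimming the thread: the hardening above is the usual
const-alias-via-union trick. A minimal standalone sketch of the idea,
with made-up names rather than the real mm_struct fields:

        struct counter {
                union {
                        const int count;   /* read-only view for everyone else */
                        int __count;       /* writable alias, helpers only */
                };
        };

        static inline void counter_add(struct counter *c, int nr)
        {
                c->__count += nr;          /* updates go through the alias */
        }

Anywhere else, writing "c->count += 1;" is rejected by the compiler as an
assignment to a read-only member, so every update is funneled through the
helpers. The kernel variant additionally marks the writable alias __private,
so sparse warns about accesses that bypass ACCESS_PRIVATE().)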
> >
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: David Hildenbrand <david@...hat.com>
> > Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
> > Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> > Cc: Mike Rapoport <rppt@...nel.org>
> > Cc: Minchan Kim <minchan@...nel.org>
> > Cc: Pedro Falcato <pfalcato@...e.de>
> > Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
>
> Hmmm, I'm not sure about this one.
>
> I think this is a 'we don't need it' situation, and it's making everything a bit
> uglier.
>
> I especially hate vma_count_add() and vma_count_sub(). You're essentially
> overriding the whole concept in these cases to make stuff that's already in
> place work.
>
> I don't think this really adds much honestly.
Hi Lorenzo,

Thanks for reviewing.

This was done to facilitate adding the assertions. So I'll drop this
along with that in the next version.
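
For context, the intent was that the inc/dec helpers would later carry the
assertions, roughly along these lines (illustrative only, not the exact
code from that patch):

        static inline void vma_count_dec(struct mm_struct *mm)
        {
                VM_WARN_ON_ONCE(!mm->vma_count);
                vma_count_sub(mm, 1);
        }

With the assertions gone there's not much reason to keep the wrappers on
their own.
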
Thanks,
Kalesh
>
> (You're also clearly missing cases as the kernel bot has found issues)
>
> > ---
> > include/linux/mm.h | 25 +++++++++++++++++++++++++
> > include/linux/mm_types.h | 5 ++++-
> > kernel/fork.c | 2 +-
> > mm/mmap.c | 2 +-
> > mm/vma.c | 12 ++++++------
> > tools/testing/vma/vma.c | 2 +-
> > tools/testing/vma/vma_internal.h | 30 +++++++++++++++++++++++++++++-
> > 7 files changed, 67 insertions(+), 11 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 138bab2988f8..8bad1454984c 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -4219,4 +4219,29 @@ static inline bool snapshot_page_is_faithful(const struct page_snapshot *ps)
> >
> > void snapshot_page(struct page_snapshot *ps, const struct page *page);
> >
> > +static inline void vma_count_init(struct mm_struct *mm)
> > +{
> > + ACCESS_PRIVATE(mm, __vma_count) = 0;
> > +}
> > +
> > +static inline void vma_count_add(struct mm_struct *mm, int nr_vmas)
> > +{
> > + ACCESS_PRIVATE(mm, __vma_count) += nr_vmas;
> > +}
> > +
> > +static inline void vma_count_sub(struct mm_struct *mm, int nr_vmas)
> > +{
> > + vma_count_add(mm, -nr_vmas);
> > +}
> > +
> > +static inline void vma_count_inc(struct mm_struct *mm)
> > +{
> > + vma_count_add(mm, 1);
> > +}
> > +
> > +static inline void vma_count_dec(struct mm_struct *mm)
> > +{
> > + vma_count_sub(mm, 1);
> > +}
> > +
> > #endif /* _LINUX_MM_H */
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 4343be2f9e85..2ea8fc722aa2 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -1020,7 +1020,10 @@ struct mm_struct {
> > #ifdef CONFIG_MMU
> > atomic_long_t pgtables_bytes; /* size of all page tables */
> > #endif
> > - int vma_count; /* number of VMAs */
> > + union {
> > + const int vma_count; /* number of VMAs */
> > + int __private __vma_count;
> > + };
> >
> > spinlock_t page_table_lock; /* Protects page tables and some
> > * counters
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index 8fcbbf947579..ea9eff416e51 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -1037,7 +1037,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
> > mmap_init_lock(mm);
> > INIT_LIST_HEAD(&mm->mmlist);
> > mm_pgtables_bytes_init(mm);
> > - mm->vma_count = 0;
> > + vma_count_init(mm);
> > mm->locked_vm = 0;
> > atomic64_set(&mm->pinned_vm, 0);
> > memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index c6769394a174..30ddd550197e 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -1828,7 +1828,7 @@ __latent_entropy int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
> > */
> > vma_iter_bulk_store(&vmi, tmp);
> >
> > - mm->vma_count++;
> > + vma_count_inc(mm);
> >
> > if (tmp->vm_ops && tmp->vm_ops->open)
> > tmp->vm_ops->open(tmp);
> > diff --git a/mm/vma.c b/mm/vma.c
> > index 64f4e7c867c3..0cd3cb472220 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -352,7 +352,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
> > * (it may either follow vma or precede it).
> > */
> > vma_iter_store_new(vmi, vp->insert);
> > - mm->vma_count++;
> > + vma_count_inc(mm);
> > }
> >
> > if (vp->anon_vma) {
> > @@ -383,7 +383,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
> > }
> > if (vp->remove->anon_vma)
> > anon_vma_merge(vp->vma, vp->remove);
> > - mm->vma_count--;
> > + vma_count_dec(mm);
> > mpol_put(vma_policy(vp->remove));
> > if (!vp->remove2)
> > WARN_ON_ONCE(vp->vma->vm_end < vp->remove->vm_end);
> > @@ -1266,7 +1266,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
> > struct mm_struct *mm;
> >
> > mm = current->mm;
> > - mm->vma_count -= vms->vma_count;
> > + vma_count_sub(mm, vms->vma_count);
> > mm->locked_vm -= vms->locked_vm;
> > if (vms->unlock)
> > mmap_write_downgrade(mm);
> > @@ -1795,7 +1795,7 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> > vma_start_write(vma);
> > vma_iter_store_new(&vmi, vma);
> > vma_link_file(vma);
> > - mm->vma_count++;
> > + vma_count_inc(mm);
> > validate_mm(mm);
> > return 0;
> > }
> > @@ -2495,7 +2495,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
> > /* Lock the VMA since it is modified after insertion into VMA tree */
> > vma_start_write(vma);
> > vma_iter_store_new(vmi, vma);
> > - map->mm->vma_count++;
> > + vma_count_inc(map->mm);
> > vma_link_file(vma);
> >
> > /*
> > @@ -2810,7 +2810,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
> > goto mas_store_fail;
> >
> > - mm->vma_count++;
> > + vma_count_inc(mm);
> > validate_mm(mm);
> > out:
> > perf_event_mmap(vma);
> > diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
> > index 69fa7d14a6c2..ee5a1e2365e0 100644
> > --- a/tools/testing/vma/vma.c
> > +++ b/tools/testing/vma/vma.c
> > @@ -261,7 +261,7 @@ static int cleanup_mm(struct mm_struct *mm, struct vma_iterator *vmi)
> > }
> >
> > mtree_destroy(&mm->mm_mt);
> > - mm->vma_count = 0;
> > + vma_count_init(mm);
> > return count;
> > }
> >
> > diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> > index 15525b86145d..6e724ba1adf4 100644
> > --- a/tools/testing/vma/vma_internal.h
> > +++ b/tools/testing/vma/vma_internal.h
> > @@ -251,7 +251,10 @@ struct mutex {};
> >
> > struct mm_struct {
> > struct maple_tree mm_mt;
> > - int vma_count; /* number of VMAs */
> > + union {
> > + const int vma_count; /* number of VMAs */
> > + int __vma_count;
> > + };
> > unsigned long total_vm; /* Total pages mapped */
> > unsigned long locked_vm; /* Pages that have PG_mlocked set */
> > unsigned long data_vm; /* VM_WRITE & ~VM_SHARED & ~VM_STACK */
> > @@ -1526,4 +1529,29 @@ static int vma_count_remaining(const struct mm_struct *mm)
> > return (max_count > vma_count) ? (max_count - vma_count) : 0;
> > }
> >
> > +static inline void vma_count_init(struct mm_struct *mm)
> > +{
> > + mm->__vma_count = 0;
> > +}
> > +
> > +static inline void vma_count_add(struct mm_struct *mm, int nr_vmas)
> > +{
> > + mm->__vma_count += nr_vmas;
> > +}
> > +
> > +static inline void vma_count_sub(struct mm_struct *mm, int nr_vmas)
> > +{
> > + vma_count_add(mm, -nr_vmas);
> > +}
> > +
> > +static inline void vma_count_inc(struct mm_struct *mm)
> > +{
> > + vma_count_add(mm, 1);
> > +}
> > +
> > +static inline void vma_count_dec(struct mm_struct *mm)
> > +{
> > + vma_count_sub(mm, 1);
> > +}
> > +
> > #endif /* __MM_VMA_INTERNAL_H */
> > --
> > 2.51.0.384.g4c02a37b29-goog
> >