Message-ID: <CAGsJ_4yPg2AOxjorD3RPyu=Ko+7gpU1=-XWqQohvLWgGrzAEDQ@mail.gmail.com>
Date: Wed, 18 Jun 2025 18:11:26 +0800
From: Barry Song <21cnbao@...il.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Barry Song <v-songbaohua@...o.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, David Hildenbrand <david@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>, Jann Horn <jannh@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>, Lokesh Gidra <lokeshgidra@...gle.com>,
Tangquan Zheng <zhengtangquan@...o.com>, Qi Zheng <zhengqi.arch@...edance.com>,
Lance Yang <ioworker0@...il.com>
Subject: Re: [PATCH v4] mm: use per_vma lock for MADV_DONTNEED
On Tue, Jun 17, 2025 at 9:39 PM Lorenzo Stoakes
<lorenzo.stoakes@...cle.com> wrote:
>
> +cc Lance
>
> Hi Barry,
>
> This needs a quick fixpatch for an issue discovered by Lance in [0], which I
> analysed in [1].
>
> Basically, _theoretically_ though not currently in practice, we might end up
> accessing uninitialised state via the struct vm_area_struct **prev value that
> is passed around the madvise code.
>
> The solution for now is to simply initialise it in the VMA read lock case, as
> all visitors used in this mode set *prev = vma before performing the operation.
>
> Cheers, Lorenzo
>
> [0]: https://lore.kernel.org/all/20250617020544.57305-1-lance.yang@linux.dev/
> [1]: https://lore.kernel.org/all/6181fd25-6527-4cd0-b67f-2098191d262d@lucifer.local/
>
> On Sun, Jun 08, 2025 at 10:01:50AM +1200, Barry Song wrote:
> > From: Barry Song <v-songbaohua@...o.com>
> >
> > Certain madvise operations, especially MADV_DONTNEED, occur far more
> > frequently than other madvise options, particularly in native and Java
> > heaps for dynamic memory management.
> >
> > Currently, the mmap_lock is always held during these operations, even when
> > unnecessary. This causes lock contention and can lead to severe priority
> > inversion, where low-priority threads—such as Android's HeapTaskDaemon—
> > hold the lock and block higher-priority threads.
> >
> > This patch enables the use of per-VMA locks when the advised range lies
> > entirely within a single VMA, avoiding the need for full VMA traversal. In
> > practice, userspace heaps rarely issue MADV_DONTNEED across multiple VMAs.
> >
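[ For illustration: the common case this fast path targets is a heap
  trim confined to a single mapping. Below is a minimal userspace
  sketch of such a call; it is an illustrative example, not part of
  the patch, and assumes a plain private anonymous mapping with a
  4KiB page size hard-coded for brevity:

	#include <stdlib.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 64UL * 4096;
		/* one anonymous mapping, i.e. a single VMA */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		p[0] = 1;	/* fault in one page */
		/*
		 * [p, p + 16 pages) lies entirely within that VMA, so
		 * with this patch the kernel can serve the call under
		 * the per-VMA read lock rather than mmap_lock.
		 */
		madvise(p, 16UL * 4096, MADV_DONTNEED);
		return 0;
	}
]
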
> > Tangquan’s testing shows that over 99.5% of memory reclaimed by Android
> > benefits from this per-VMA lock optimization. After extended runtime,
> > 217,735 madvise calls from HeapTaskDaemon used the per-VMA path, while
> > only 1,231 fell back to mmap_lock.
> >
> > To simplify handling, the implementation falls back to the standard
> > mmap_lock if userfaultfd is enabled on the VMA, avoiding the complexity of
> > userfaultfd_remove().
> >
> > Many thanks to Lorenzo's work[1] on:
> > "Refactor the madvise() code to retain state about the locking mode
> > utilised for traversing VMAs.
> >
> > Then use this mechanism to permit VMA locking to be done later in the
> > madvise() logic and also to allow altering of the locking mode to permit
> > falling back to an mmap read lock if required."
> >
> > One important point, as pointed out by Jann[2], is that
> > untagged_addr_remote() requires holding mmap_lock. This is because
> > address tagging on x86 and RISC-V is quite complex.
> >
> > Until untagged_addr_remote() becomes atomic—which seems unlikely in
> > the near future—we cannot support per-VMA locks for remote processes.
> > So for now, only local processes are supported.
> >
> > Link: https://lore.kernel.org/all/0b96ce61-a52c-4036-b5b6-5c50783db51f@lucifer.local/ [1]
> > Link: https://lore.kernel.org/all/CAG48ez11zi-1jicHUZtLhyoNPGGVB+ROeAJCUw48bsjk4bbEkA@mail.gmail.com/ [2]
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> > Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
> > Cc: David Hildenbrand <david@...hat.com>
> > Cc: Vlastimil Babka <vbabka@...e.cz>
> > Cc: Jann Horn <jannh@...gle.com>
> > Cc: Suren Baghdasaryan <surenb@...gle.com>
> > Cc: Lokesh Gidra <lokeshgidra@...gle.com>
> > Cc: Tangquan Zheng <zhengtangquan@...o.com>
> > Cc: Qi Zheng <zhengqi.arch@...edance.com>
> > Signed-off-by: Barry Song <v-songbaohua@...o.com>
> > ---
> > -v4:
> > * collect Lorenzo's RB;
> > * use visit() for per-vma path
> >
> > mm/madvise.c | 195 ++++++++++++++++++++++++++++++++++++++-------------
> > 1 file changed, 147 insertions(+), 48 deletions(-)
> >
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 56d9ca2557b9..8382614b71d1 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -48,38 +48,19 @@ struct madvise_walk_private {
> > bool pageout;
> > };
> >
> > +enum madvise_lock_mode {
> > + MADVISE_NO_LOCK,
> > + MADVISE_MMAP_READ_LOCK,
> > + MADVISE_MMAP_WRITE_LOCK,
> > + MADVISE_VMA_READ_LOCK,
> > +};
> > +
> > struct madvise_behavior {
> > int behavior;
> > struct mmu_gather *tlb;
> > + enum madvise_lock_mode lock_mode;
> > };
> >
> > -/*
> > - * Any behaviour which results in changes to the vma->vm_flags needs to
> > - * take mmap_lock for writing. Others, which simply traverse vmas, need
> > - * to only take it for reading.
> > - */
> > -static int madvise_need_mmap_write(int behavior)
> > -{
> > - switch (behavior) {
> > - case MADV_REMOVE:
> > - case MADV_WILLNEED:
> > - case MADV_DONTNEED:
> > - case MADV_DONTNEED_LOCKED:
> > - case MADV_COLD:
> > - case MADV_PAGEOUT:
> > - case MADV_FREE:
> > - case MADV_POPULATE_READ:
> > - case MADV_POPULATE_WRITE:
> > - case MADV_COLLAPSE:
> > - case MADV_GUARD_INSTALL:
> > - case MADV_GUARD_REMOVE:
> > - return 0;
> > - default:
> > - /* be safe, default to 1. list exceptions explicitly */
> > - return 1;
> > - }
> > -}
> > -
> > #ifdef CONFIG_ANON_VMA_NAME
> > struct anon_vma_name *anon_vma_name_alloc(const char *name)
> > {
> > @@ -1486,6 +1467,44 @@ static bool process_madvise_remote_valid(int behavior)
> > }
> > }
> >
> > +/*
> > + * Try to acquire a VMA read lock if possible.
> > + *
> > + * We only support this lock over a single VMA, which the input range must
> > + * span either partially or fully.
> > + *
> > + * This function always returns with an appropriate lock held. If a VMA read
> > + * lock could be acquired, we return the locked VMA.
> > + *
> > + * If a VMA read lock could not be acquired, we return NULL and expect caller to
> > + * fallback to mmap lock behaviour.
> > + */
> > +static struct vm_area_struct *try_vma_read_lock(struct mm_struct *mm,
> > + struct madvise_behavior *madv_behavior,
> > + unsigned long start, unsigned long end)
> > +{
> > + struct vm_area_struct *vma;
> > +
> > + vma = lock_vma_under_rcu(mm, start);
> > + if (!vma)
> > + goto take_mmap_read_lock;
> > + /*
> > + * Must span only a single VMA; uffd and remote processes are
> > + * unsupported.
> > + */
> > + if (end > vma->vm_end || current->mm != mm ||
> > + userfaultfd_armed(vma)) {
> > + vma_end_read(vma);
> > + goto take_mmap_read_lock;
> > + }
> > + return vma;
> > +
> > +take_mmap_read_lock:
> > + mmap_read_lock(mm);
> > + madv_behavior->lock_mode = MADVISE_MMAP_READ_LOCK;
> > + return NULL;
> > +}
> > +
> > /*
> > * Walk the vmas in range [start,end), and call the visit function on each one.
> > * The visit function will get start and end parameters that cover the overlap
> > @@ -1496,7 +1515,8 @@ static bool process_madvise_remote_valid(int behavior)
> > */
> > static
> > int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
> > - unsigned long end, void *arg,
> > + unsigned long end, struct madvise_behavior *madv_behavior,
> > + void *arg,
> > int (*visit)(struct vm_area_struct *vma,
> > struct vm_area_struct **prev, unsigned long start,
> > unsigned long end, void *arg))
> > @@ -1505,6 +1525,20 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
> > struct vm_area_struct *prev;
> > unsigned long tmp;
> > int unmapped_error = 0;
> > + int error;
> > +
> > + /*
> > + * If VMA read lock is supported, apply madvise to a single VMA
> > + * tentatively, avoiding walking VMAs.
> > + */
> > + if (madv_behavior && madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK) {
> > + vma = try_vma_read_lock(mm, madv_behavior, start, end);
> > + if (vma) {
> > + error = visit(vma, &prev, start, end, arg);
> > + vma_end_read(vma);
> > + return error;
> > + }
> > + }
> >
> > /*
> > * If the interval [start,end) covers some unmapped address
> > @@ -1516,8 +1550,6 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
> > prev = vma;
> >
> > for (;;) {
> > - int error;
> > -
> > /* Still start < end. */
> > if (!vma)
> > return -ENOMEM;
> > @@ -1598,34 +1630,86 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
> > if (end == start)
> > return 0;
> >
> > - return madvise_walk_vmas(mm, start, end, anon_name,
> > + return madvise_walk_vmas(mm, start, end, NULL, anon_name,
> > madvise_vma_anon_name);
> > }
> > #endif /* CONFIG_ANON_VMA_NAME */
> >
> > -static int madvise_lock(struct mm_struct *mm, int behavior)
> > +
> > +/*
> > + * Any behaviour which results in changes to the vma->vm_flags needs to
> > + * take mmap_lock for writing. Others, which simply traverse vmas, need
> > + * to only take it for reading.
> > + */
> > +static enum madvise_lock_mode get_lock_mode(struct madvise_behavior *madv_behavior)
> > {
> > + int behavior = madv_behavior->behavior;
> > +
> > if (is_memory_failure(behavior))
> > - return 0;
> > + return MADVISE_NO_LOCK;
> >
> > - if (madvise_need_mmap_write(behavior)) {
> > + switch (behavior) {
> > + case MADV_REMOVE:
> > + case MADV_WILLNEED:
> > + case MADV_COLD:
> > + case MADV_PAGEOUT:
> > + case MADV_FREE:
> > + case MADV_POPULATE_READ:
> > + case MADV_POPULATE_WRITE:
> > + case MADV_COLLAPSE:
> > + case MADV_GUARD_INSTALL:
> > + case MADV_GUARD_REMOVE:
> > + return MADVISE_MMAP_READ_LOCK;
> > + case MADV_DONTNEED:
> > + case MADV_DONTNEED_LOCKED:
> > + return MADVISE_VMA_READ_LOCK;
> > + default:
> > + return MADVISE_MMAP_WRITE_LOCK;
> > + }
> > +}
> > +
> > +static int madvise_lock(struct mm_struct *mm,
> > + struct madvise_behavior *madv_behavior)
> > +{
> > + enum madvise_lock_mode lock_mode = get_lock_mode(madv_behavior);
> > +
> > + switch (lock_mode) {
> > + case MADVISE_NO_LOCK:
> > + break;
> > + case MADVISE_MMAP_WRITE_LOCK:
> > if (mmap_write_lock_killable(mm))
> > return -EINTR;
> > - } else {
> > + break;
> > + case MADVISE_MMAP_READ_LOCK:
> > mmap_read_lock(mm);
> > + break;
> > + case MADVISE_VMA_READ_LOCK:
> > + /* We will acquire the lock per-VMA in madvise_walk_vmas(). */
> > + break;
> > }
> > +
> > + madv_behavior->lock_mode = lock_mode;
> > return 0;
> > }
> >
> > -static void madvise_unlock(struct mm_struct *mm, int behavior)
> > +static void madvise_unlock(struct mm_struct *mm,
> > + struct madvise_behavior *madv_behavior)
> > {
> > - if (is_memory_failure(behavior))
> > + switch (madv_behavior->lock_mode) {
> > + case MADVISE_NO_LOCK:
> > return;
> > -
> > - if (madvise_need_mmap_write(behavior))
> > + case MADVISE_MMAP_WRITE_LOCK:
> > mmap_write_unlock(mm);
> > - else
> > + break;
> > + case MADVISE_MMAP_READ_LOCK:
> > mmap_read_unlock(mm);
> > + break;
> > + case MADVISE_VMA_READ_LOCK:
> > + /* We will drop the lock per-VMA in madvise_walk_vmas(). */
> > + break;
> > + }
> > +
> > + madv_behavior->lock_mode = MADVISE_NO_LOCK;
> > }
> >
> > static bool madvise_batch_tlb_flush(int behavior)
> > @@ -1710,6 +1794,21 @@ static bool is_madvise_populate(int behavior)
> > }
> > }
> >
> > +/*
> > + * untagged_addr_remote() assumes mmap_lock is already held. On
> > + * architectures like x86 and RISC-V, tagging is tricky because each
> > + * mm may have a different tagging mask. However, we might only hold
> > + * the per-VMA lock (currently only local processes are supported),
> > + * so untagged_addr is used to avoid the mmap_lock assertion for
> > + * local processes.
> > + */
> > +static inline unsigned long get_untagged_addr(struct mm_struct *mm,
> > + unsigned long start)
> > +{
> > + return current->mm == mm ? untagged_addr(start) :
> > + untagged_addr_remote(mm, start);
> > +}
> > +
> > static int madvise_do_behavior(struct mm_struct *mm,
> > unsigned long start, size_t len_in,
> > struct madvise_behavior *madv_behavior)
> > @@ -1721,7 +1820,7 @@ static int madvise_do_behavior(struct mm_struct *mm,
> >
> > if (is_memory_failure(behavior))
> > return madvise_inject_error(behavior, start, start + len_in);
> > - start = untagged_addr_remote(mm, start);
> > + start = get_untagged_addr(mm, start);
> > end = start + PAGE_ALIGN(len_in);
> >
> > blk_start_plug(&plug);
> > @@ -1729,7 +1828,7 @@ static int madvise_do_behavior(struct mm_struct *mm,
> > error = madvise_populate(mm, start, end, behavior);
> > else
> > error = madvise_walk_vmas(mm, start, end, madv_behavior,
> > - madvise_vma_behavior);
> > + madv_behavior, madvise_vma_behavior);
> > blk_finish_plug(&plug);
> > return error;
> > }
> > @@ -1817,13 +1916,13 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
> >
> > if (madvise_should_skip(start, len_in, behavior, &error))
> > return error;
> > - error = madvise_lock(mm, behavior);
> > + error = madvise_lock(mm, &madv_behavior);
> > if (error)
> > return error;
> > madvise_init_tlb(&madv_behavior, mm);
> > error = madvise_do_behavior(mm, start, len_in, &madv_behavior);
> > madvise_finish_tlb(&madv_behavior);
> > - madvise_unlock(mm, behavior);
> > + madvise_unlock(mm, &madv_behavior);
> >
> > return error;
> > }
> > @@ -1847,7 +1946,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
> >
> > total_len = iov_iter_count(iter);
> >
> > - ret = madvise_lock(mm, behavior);
> > + ret = madvise_lock(mm, &madv_behavior);
> > if (ret)
> > return ret;
> > madvise_init_tlb(&madv_behavior, mm);
> > @@ -1880,8 +1979,8 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
> >
> > /* Drop and reacquire lock to unwind race. */
> > madvise_finish_tlb(&madv_behavior);
> > - madvise_unlock(mm, behavior);
> > - ret = madvise_lock(mm, behavior);
> > + madvise_unlock(mm, &madv_behavior);
> > + ret = madvise_lock(mm, &madv_behavior);
> > if (ret)
> > goto out;
> > madvise_init_tlb(&madv_behavior, mm);
> > @@ -1892,7 +1991,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
> > iov_iter_advance(iter, iter_iov_len(iter));
> > }
> > madvise_finish_tlb(&madv_behavior);
> > - madvise_unlock(mm, behavior);
> > + madvise_unlock(mm, &madv_behavior);
> >
> > out:
> > ret = (total_len - iov_iter_count(iter)) ? : ret;
> > --
> > 2.39.3 (Apple Git-146)
> >
>
> ----8<----
> From 1ffcaea75ebdaffe15805386f6d7733883d265a5 Mon Sep 17 00:00:00 2001
> From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> Date: Tue, 17 Jun 2025 14:35:13 +0100
> Subject: [PATCH] mm/madvise: avoid any chance of uninitialised pointer deref
>
> If we were to extend madvise() to support more operations under the VMA lock,
> we could end up dereferencing an uninitialised prev pointer in
> madvise_update_vma().
>
> Avoid this by explicitly setting prev to vma before invoking the visit()
> function.
>
> This has no impact on behaviour: none of the visitors compatible with a VMA
> lock requires prev to be set to the previous VMA, and in any case we only
> examine a single VMA in VMA lock mode.
>
> Reported-by: Lance Yang <ioworker0@...il.com>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> ---
> mm/madvise.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index efe5d64e1175..0970623a0e98 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -1333,6 +1333,8 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
> return madvise_guard_remove(vma, prev, start, end);
> }
>
> + /* We cannot provide prev in this lock mode. */
> + VM_WARN_ON_ONCE(arg->lock_mode == MADVISE_VMA_READ_LOCK);
Thanks, Lorenzo.

Do we even reach this point in the MADVISE_MMAP_READ_LOCK cases?
madvise_update_vma() attempts to merge or split VMAs, which would seem
to be a scenario that requires the mmap write lock.

The prerequisite for using a VMA read lock is that the operation must
also be safe under an mmap read lock.
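To make the question concrete, here is a toy, userspace-compilable
model of the dispatch order as I understand it. The function names are
borrowed from mm/madvise.c but the bodies are stubs; treat this as a
sketch of my reading of the code, not the literal kernel source:

	#include <stdio.h>
	#include <sys/mman.h>

	static int madvise_dontneed_free(void) { return 0; }	/* stub */
	static int madvise_cold(void) { return 0; }		/* stub */

	/* stub for the merge/split path I am asking about */
	static int madvise_update_vma(void)
	{
		printf("merge/split: mmap write lock expected\n");
		return 0;
	}

	static int madvise_vma_behavior(int behavior)
	{
		switch (behavior) {
		case MADV_DONTNEED:
			/* MADVISE_VMA_READ_LOCK case: returns early */
			return madvise_dontneed_free();
		case MADV_COLD:
			/* MADVISE_MMAP_READ_LOCK case: returns early */
			return madvise_cold();
		default:
			break;
		}
		/*
		 * Only vm_flags-changing behaviors fall through, and
		 * get_lock_mode() classes those MADVISE_MMAP_WRITE_LOCK.
		 */
		return madvise_update_vma();
	}

	int main(void)
	{
		madvise_vma_behavior(MADV_DONTNEED);	/* silent */
		madvise_vma_behavior(MADV_DONTDUMP);	/* prints */
		return 0;
	}

If that reading is right, none of the read-lock behaviors can reach
madvise_update_vma() today, so the new warning is defensive against
future additions rather than catching a live path.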
> anon_name = anon_vma_name(vma);
> anon_vma_name_get(anon_name);
> error = madvise_update_vma(vma, prev, start, end, new_flags,
> @@ -1549,6 +1551,7 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
> if (madv_behavior && madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK) {
> vma = try_vma_read_lock(mm, madv_behavior, start, end);
> if (vma) {
> + prev = vma;
> error = visit(vma, &prev, start, end, arg);
> vma_end_read(vma);
> return error;
> --
> 2.49.0
Thanks
Barry