Message-ID: <93385672-927f-4de5-a158-fc3fc0424be0@lucifer.local>
Date: Tue, 27 May 2025 10:20:04 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Barry Song <v-songbaohua@...o.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
David Hildenbrand <david@...hat.com>, Vlastimil Babka <vbabka@...e.cz>,
Jann Horn <jannh@...gle.com>, Suren Baghdasaryan <surenb@...gle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
Tangquan Zheng <zhengtangquan@...o.com>
Subject: Re: [PATCH RFC] mm: use per_vma lock for MADV_DONTNEED

Overall - thanks for this, and I'm not sure why we didn't think of doing
this sooner :P This seems like a super valid thing to try to use the VMA
lock for.
I see you've cc'd Suren, who has the most expertise in this and can
hopefully audit this and ensure all is good, but from the process addresses
doc (see below), I think we're good to just have the VMA stabilised for a
zap.
On Tue, May 27, 2025 at 04:41:45PM +1200, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
>
> Certain madvise operations, especially MADV_DONTNEED, occur far more
> frequently than other madvise options, particularly in native and Java
> heaps for dynamic memory management.
Ack, yeah - I had gathered previously that this is the case.
>
> Currently, the mmap_lock is always held during these operations, even when
> unnecessary. This causes lock contention and can lead to severe priority
> inversion, where low-priority threads—such as Android's HeapTaskDaemon—
> hold the lock and block higher-priority threads.
That's very nasty... we definitely want to eliminate as much mmap_lock
contention as possible.
>
> This patch enables the use of per-VMA locks when the advised range lies
> entirely within a single VMA, avoiding the need for full VMA traversal. In
> practice, userspace heaps rarely issue MADV_DONTNEED across multiple VMAs.
Yeah this single VMA requirement is obviously absolutely key.
As per my docs [0] actually, for zapping a single VMA, 'The VMA need only be
kept stable for this operation.' (I had to look this up to remind myself :P)
[0]: https://kernel.org/doc/html/latest/mm/process_addrs.html
So we actually... should be good here, locking-wise.
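To spell that out (purely illustrative - this ignores the uffd/mseal checks
your patch rightly performs, and assumes start/len have already been
untagged and checked against the VMA bounds), something along these lines is
all the doc demands for a single-VMA zap:

	vma = lock_vma_under_rcu(mm, start);
	if (vma) {
		/* VMA stabilised - no mmap_lock needed for the zap. */
		zap_page_range_single(vma, start, len, NULL);
		vma_end_read(vma);
	}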
>
> Tangquan’s testing shows that over 99.5% of memory reclaimed by Android
> benefits from this per-VMA lock optimization. After extended runtime,
> 217,735 madvise calls from HeapTaskDaemon used the per-VMA path, while
> only 1,231 fell back to mmap_lock.
Thanks, this sounds really promising!
I take it then that you have, as a result, heavily tested this change?
>
> To simplify handling, the implementation falls back to the standard
> mmap_lock if userfaultfd is enabled on the VMA, avoiding the complexity of
> userfaultfd_remove().
Oh GOD do I hate how we implement uffd. Have I ever mentioned that? Well,
let me mention it again...
>
> Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: Jann Horn <jannh@...gle.com>
> Cc: Suren Baghdasaryan <surenb@...gle.com>
> Cc: Lokesh Gidra <lokeshgidra@...gle.com>
> Cc: Tangquan Zheng <zhengtangquan@...o.com>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> ---
> mm/madvise.c | 34 ++++++++++++++++++++++++++++++++++
> 1 file changed, 34 insertions(+)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 8433ac9b27e0..da016a1d0434 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -1817,6 +1817,39 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
>
> if (madvise_should_skip(start, len_in, behavior, &error))
> return error;
> +
> + /*
> + * MADV_DONTNEED is commonly used with userspace heaps and most often
> + * affects a single VMA. In these cases, we can use per-VMA locks to
> + * reduce contention on the mmap_lock.
> + */
> + if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED) {
So firstly, doing this here means process_madvise() doesn't get this benefit,
and we're inconsistent between the two, which we really want to avoid.
But secondly - we definitely need to find a better way to do this :) This
basically follows the 'ignore the existing approach and throw in an if
(special case) { ... }' pattern that I feel we really need to do all we can
to avoid in the kernel.
This way lies uffd, hugetlb, and thus horrors beyond imagining.
I can see why you did this, as this is kind of a special case and we already
do this kind of thing all over the place, but let's try to avoid it here.
So I suggest:
- Remove any code for this from do_madvise() and thus make it available to
process_madvise() also.
- Try to avoid the special casing here as much as humanly possible :)
- Update madvise_lock()/unlock() to get passed a pointer to struct
madvise_behavior, to which we can add a boolean or, even better I think,
an enum indicating which lock type was taken (this can simplify
madvise_unlock() also) - see the rough sketch after this list.
- Update madvise_lock() to do all of the checks below; we already
effectively do a switch (behavior), so it's not so crazy to do this. And
you can also do the fallthrough logic there.
- Obviously madvise_unlock() can be updated to do vma_end_read().
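Something roughly along these lines is what I have in mind - entirely
untested and hand-waved, and the names I've added (lock_mode, the
MADVISE_*_LOCK values, the vma/start/end fields in struct madvise_behavior)
are placeholders I've just made up for the sketch:

enum madvise_lock_mode {
        MADVISE_NO_LOCK,
        MADVISE_MMAP_READ_LOCK,
        MADVISE_MMAP_WRITE_LOCK,
        MADVISE_VMA_READ_LOCK,
};

struct madvise_behavior {
        int behavior;
        struct mmu_gather *tlb;
        /* New for the sketch: */
        enum madvise_lock_mode lock_mode;
        struct vm_area_struct *vma;
        unsigned long start, end;
};

static int madvise_lock(struct mm_struct *mm,
                        struct madvise_behavior *madv_behavior)
{
        int behavior = madv_behavior->behavior;

        /* The memory failure behaviours take no lock at all. */
        if (is_memory_failure(behavior)) {
                madv_behavior->lock_mode = MADVISE_NO_LOCK;
                return 0;
        }

        /*
         * Try the per-VMA lock for the DONTNEED variants; fall back to
         * the usual mmap_lock paths if we can't get the VMA lock, the
         * range spans multiple VMAs, or uffd is armed on the VMA.
         */
        if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED) {
                struct vm_area_struct *vma;

                /* start/end assumed already untagged + validated. */
                vma = lock_vma_under_rcu(mm, madv_behavior->start);
                if (vma && madv_behavior->end <= vma->vm_end &&
                    !userfaultfd_armed(vma)) {
                        madv_behavior->lock_mode = MADVISE_VMA_READ_LOCK;
                        madv_behavior->vma = vma;
                        return 0;
                }
                if (vma)
                        vma_end_read(vma);
        }

        if (madvise_need_mmap_write(behavior)) {
                if (mmap_write_lock_killable(mm))
                        return -EINTR;
                madv_behavior->lock_mode = MADVISE_MMAP_WRITE_LOCK;
        } else {
                mmap_read_lock(mm);
                madv_behavior->lock_mode = MADVISE_MMAP_READ_LOCK;
        }
        return 0;
}

static void madvise_unlock(struct mm_struct *mm,
                           struct madvise_behavior *madv_behavior)
{
        switch (madv_behavior->lock_mode) {
        case MADVISE_VMA_READ_LOCK:
                vma_end_read(madv_behavior->vma);
                break;
        case MADVISE_MMAP_WRITE_LOCK:
                mmap_write_unlock(mm);
                break;
        case MADVISE_MMAP_READ_LOCK:
                mmap_read_unlock(mm);
                break;
        case MADVISE_NO_LOCK:
                break;
        }
}

That way process_madvise() gets the benefit for free, madvise_unlock()
becomes a trivial switch, and the can_modify_vma_madv() and
madvise_dontneed_free() bits stay on the common madvise_do_behavior() path
rather than being open-coded in do_madvise().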
> + struct vm_area_struct *prev, *vma;
> + unsigned long untagged_start, end;
> +
> + untagged_start = untagged_addr(start);
> + end = untagged_start + len_in;
> + vma = lock_vma_under_rcu(mm, untagged_start);
> + if (!vma)
> + goto lock;
> + if (end > vma->vm_end || userfaultfd_armed(vma)) {
> + vma_end_read(vma);
> + goto lock;
> + }
> + if (unlikely(!can_modify_vma_madv(vma, behavior))) {
> + error = -EPERM;
> + vma_end_read(vma);
> + goto out;
> + }
> + madvise_init_tlb(&madv_behavior, mm);
> + error = madvise_dontneed_free(vma, &prev, untagged_start,
> + end, &madv_behavior);
> + madvise_finish_tlb(&madv_behavior);
> + vma_end_read(vma);
> + goto out;
> + }
> +
> +lock:
> error = madvise_lock(mm, behavior);
> if (error)
> return error;
> @@ -1825,6 +1858,7 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
> madvise_finish_tlb(&madv_behavior);
> madvise_unlock(mm, behavior);
>
> +out:
> return error;
> }
>
> --
> 2.39.3 (Apple Git-146)
>
Cheers, Lorenzo