Message-Id: <20250527044145.13153-1-21cnbao@gmail.com>
Date: Tue, 27 May 2025 16:41:45 +1200
From: Barry Song <21cnbao@...il.com>
To: akpm@...ux-foundation.org,
linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Barry Song <v-songbaohua@...o.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
David Hildenbrand <david@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>,
Jann Horn <jannh@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
Tangquan Zheng <zhengtangquan@...o.com>
Subject: [PATCH RFC] mm: use per_vma lock for MADV_DONTNEED
From: Barry Song <v-songbaohua@...o.com>
Certain madvise operations, especially MADV_DONTNEED, occur far more
frequently than others, particularly in native and Java heaps for dynamic
memory management.
Currently, the mmap_lock is always held during these operations, even when
unnecessary. This causes lock contention and can lead to severe priority
inversion: low-priority threads, such as Android's HeapTaskDaemon, hold
the lock and block higher-priority threads.
This patch enables the use of per-VMA locks when the advised range lies
entirely within a single VMA, avoiding the need for full VMA traversal. In
practice, userspace heaps rarely issue MADV_DONTNEED across multiple VMAs.
Tangquan's testing shows that over 99.5% of the memory reclaimed on
Android benefits from this per-VMA lock optimization. After extended
runtime, 217,735 madvise calls from HeapTaskDaemon used the per-VMA path,
while only 1,231 fell back to mmap_lock.
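As an illustration (not part of the patch), the pattern this fast path
targets looks roughly like the following userspace sketch: a heap keeps
one large anonymous mapping (a single VMA) and repeatedly returns chunks
of it with MADV_DONTNEED. The sizes and offsets are arbitrary:

	#include <sys/mman.h>

	int main(void)
	{
		/* One 64 MiB anonymous mapping -> a single VMA. */
		size_t heap_size = 64UL << 20;
		char *heap = mmap(NULL, heap_size, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (heap == MAP_FAILED)
			return 1;

		heap[0] = 1;	/* fault in at least one page */

		/*
		 * Release a 1 MiB chunk inside the mapping. The range lies
		 * entirely within one VMA (MADV_DONTNEED does not split
		 * VMAs), so it qualifies for the per-VMA lock fast path.
		 */
		if (madvise(heap + (4UL << 20), 1UL << 20, MADV_DONTNEED))
			return 1;
		return 0;
	}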
To simplify handling, the implementation falls back to the standard
mmap_lock if userfaultfd is armed on the VMA, avoiding the complexity of
userfaultfd_remove().
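For clarity, the conditions gating the fast path can be summarized as a
sketch; this is a hypothetical helper mirroring the checks in the diff
below, not code from the patch:

	/* Kernel context: needs <linux/mm.h> and <linux/userfaultfd_k.h>. */
	static bool madv_dontneed_can_use_vma_lock(struct vm_area_struct *vma,
						   unsigned long end, int behavior)
	{
		if (behavior != MADV_DONTNEED && behavior != MADV_DONTNEED_LOCKED)
			return false;	/* other advice may need full VMA traversal */
		if (end > vma->vm_end)
			return false;	/* range extends beyond the locked VMA */
		if (userfaultfd_armed(vma))
			return false;	/* would require userfaultfd_remove() handling */
		return true;
	}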
Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Jann Horn <jannh@...gle.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>
Cc: Lokesh Gidra <lokeshgidra@...gle.com>
Cc: Tangquan Zheng <zhengtangquan@...o.com>
Signed-off-by: Barry Song <v-songbaohua@...o.com>
---
mm/madvise.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/mm/madvise.c b/mm/madvise.c
index 8433ac9b27e0..da016a1d0434 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1817,6 +1817,39 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
 
 	if (madvise_should_skip(start, len_in, behavior, &error))
 		return error;
+
+	/*
+	 * MADV_DONTNEED is commonly used with userspace heaps and most often
+	 * affects a single VMA. In these cases, we can use per-VMA locks to
+	 * reduce contention on the mmap_lock.
+	 */
+	if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED) {
+		struct vm_area_struct *prev, *vma;
+		unsigned long untagged_start, end;
+
+		untagged_start = untagged_addr(start);
+		end = untagged_start + PAGE_ALIGN(len_in);
+		vma = lock_vma_under_rcu(mm, untagged_start);
+		if (!vma)
+			goto lock;
+		if (end > vma->vm_end || userfaultfd_armed(vma)) {
+			vma_end_read(vma);
+			goto lock;
+		}
+		if (unlikely(!can_modify_vma_madv(vma, behavior))) {
+			error = -EPERM;
+			vma_end_read(vma);
+			goto out;
+		}
+		madvise_init_tlb(&madv_behavior, mm);
+		error = madvise_dontneed_free(vma, &prev, untagged_start,
+					      end, &madv_behavior);
+		madvise_finish_tlb(&madv_behavior);
+		vma_end_read(vma);
+		goto out;
+	}
+
+lock:
 	error = madvise_lock(mm, behavior);
 	if (error)
 		return error;
@@ -1825,6 +1858,7 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
 	madvise_finish_tlb(&madv_behavior);
 	madvise_unlock(mm, behavior);
 
+out:
 	return error;
 }
 
--
2.39.3 (Apple Git-146)