Message-ID: <20251201185604.210634-10-shivankg@amd.com>
Date: Mon, 1 Dec 2025 18:56:11 +0000
From: Shivank Garg <shivankg@....com>
To: Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand
<david@...nel.org>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
CC: Zi Yan <ziy@...dia.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>, Barry Song
<baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>, Steven Rostedt
<rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>, "Mathieu
Desnoyers" <mathieu.desnoyers@...icios.com>, Zach O'Keefe
<zokeefe@...gle.com>, <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>, <shivankg@....com>, Branden Moore
<Branden.Moore@....com>
Subject: [PATCH V3 2/2] mm/khugepaged: retry with sync writeback for MADV_COLLAPSE

When MADV_COLLAPSE is called on file-backed mappings (e.g., executable
text sections), the pages may still be dirty from recent writes.
collapse_file() then triggers async writeback and fails with
SCAN_PAGE_DIRTY_OR_WRITEBACK (-EAGAIN).

MADV_COLLAPSE is a synchronous operation from which userspace expects
an immediate result. If the collapse fails due to dirty pages, perform
synchronous writeback on the affected range and retry once.

This avoids spurious failures for freshly written executables without
incurring unnecessary synchronous I/O for mappings that are already
clean.
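
For illustration, a minimal userspace sketch of the affected pattern
(a hypothetical reproducer, not taken from the report; the file path is
a placeholder and alignment/size handling is simplified):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  #ifndef MADV_COLLAPSE
  #define MADV_COLLAPSE 25	/* uapi value from asm-generic/mman-common.h */
  #endif

  int main(void)
  {
  	/* Placeholder: any binary that was written just before mapping. */
  	const char *path = "./freshly-built-binary";
  	struct stat st;
  	void *addr;
  	int fd;

  	fd = open(path, O_RDONLY);
  	if (fd < 0 || fstat(fd, &st) < 0)
  		return 1;

  	/* Map the file executable, as a loader would map a text segment. */
  	addr = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC, MAP_PRIVATE,
  		    fd, 0);
  	if (addr == MAP_FAILED)
  		return 1;

  	/*
  	 * Without this patch, the still-dirty pagecache makes the
  	 * collapse fail with EAGAIN; with it, the range is written back
  	 * synchronously and retried once.
  	 */
  	if (madvise(addr, st.st_size, MADV_COLLAPSE))
  		perror("MADV_COLLAPSE");

  	munmap(addr, st.st_size);
  	close(fd);
  	return 0;
  }
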
Reported-by: Branden Moore <Branden.Moore@....com>
Closes: https://lore.kernel.org/all/4e26fe5e-7374-467c-a333-9dd48f85d7cc@amd.com
Fixes: 34488399fa08 ("mm/madvise: add file and shmem support to MADV_COLLAPSE")
Suggested-by: David Hildenbrand <david@...nel.org>
Signed-off-by: Shivank Garg <shivankg@....com>
---
 mm/khugepaged.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 219dfa2e523c..7a12e9ef30b4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -22,6 +22,7 @@
#include <linux/dax.h>
#include <linux/ksm.h>
#include <linux/pgalloc.h>
+#include <linux/backing-dev.h>
#include <asm/tlb.h>
#include "internal.h"
@@ -2787,9 +2788,11 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
hend = end & HPAGE_PMD_MASK;
for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
+ bool retried = false;
int result = SCAN_FAIL;
if (!mmap_locked) {
+retry:
cond_resched();
mmap_read_lock(mm);
mmap_locked = true;
@@ -2819,6 +2822,43 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
if (!mmap_locked)
*lock_dropped = true;
+ /*
+ * If the file-backed VMA has dirty pages, the scan triggers
+ * async writeback and returns SCAN_PAGE_DIRTY_OR_WRITEBACK.
+ * Since MADV_COLLAPSE is sync, we force sync writeback and
+ * retry once.
+ */
+ if (result == SCAN_PAGE_DIRTY_OR_WRITEBACK && !retried) {
+ /*
+ * File scan drops the lock. We must re-acquire it to
+ * safely inspect the VMA and hold the file reference.
+ */
+ if (!mmap_locked) {
+ cond_resched();
+ mmap_read_lock(mm);
+ mmap_locked = true;
+ result = hugepage_vma_revalidate(mm, addr, false, &vma, cc);
+ if (result != SCAN_SUCCEED)
+ goto handle_result;
+ }
+
+ if (!vma_is_anonymous(vma) && vma->vm_file &&
+ mapping_can_writeback(vma->vm_file->f_mapping)) {
+ struct file *file = get_file(vma->vm_file);
+ pgoff_t pgoff = linear_page_index(vma, addr);
+ loff_t lstart = (loff_t)pgoff << PAGE_SHIFT;
+ loff_t lend = lstart + HPAGE_PMD_SIZE - 1;
+
+ mmap_read_unlock(mm);
+ mmap_locked = false;
+ *lock_dropped = true;
+ filemap_write_and_wait_range(file->f_mapping, lstart, lend);
+ fput(file);
+ retried = true;
+ goto retry;
+ }
+ }
+
handle_result:
switch (result) {
case SCAN_SUCCEED:
--
2.43.0