Message-ID: <20250826065848.346066-2-harry.yoo@oracle.com>
Date: Tue, 26 Aug 2025 15:58:48 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Suren Baghdasaryan <surenb@...gle.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
David Hildenbrand <david@...hat.com>, Kees Cook <kees@...nel.org>
Cc: Vlastimil Babka <vbabka@...e.cz>, Shakeel Butt <shakeel.butt@...ux.dev>,
Mike Rapoport <rppt@...nel.org>, Michal Hocko <mhocko@...e.com>,
Jonathan Corbet <corbet@....net>, Jann Horn <jannh@...gle.com>,
Pedro Falcato <pfalcato@...e.de>, Rik van Riel <riel@...riel.com>,
linux-mm@...ck.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, Harry Yoo <harry.yoo@...cle.com>
Subject: [PATCH V1 2/2] mm: document when rmap locks can be skipped when setting need_rmap_locks
While move_ptes() explains when rmap locks can be skipped, it is not
immediately obvious, when reading the code that sets
pmc.need_rmap_locks, why it is safe for need_rmap_locks to be false.
Add a brief explanation to copy_vma() and relocate_vma_down(), and add
a pointer to the comment in move_ptes().

While at it, fix and improve the comment in move_ptes().
Signed-off-by: Harry Yoo <harry.yoo@...cle.com>
---
mm/mremap.c | 4 +++-
mm/vma.c | 7 +++++++
mm/vma_exec.c | 5 +++++
3 files changed, 15 insertions(+), 1 deletion(-)
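
For reviewers, a minimal userspace model of the invariant the changelog
describes, assuming rmap visits vmas in ascending vm_pgoff order; the
struct and helpers below are hypothetical stand-ins, not kernel code:

/* Model only: mirrors the rule documented in move_ptes(), not the
 * kernel implementation. */
#include <stdbool.h>
#include <stdio.h>

struct vma_model {
	unsigned long vm_pgoff;		/* position in rmap traversal order */
	bool temporary_stack;		/* vma_is_temporary_stack() stand-in */
};

/* Locks may be skipped when (a) the destination is exec()'s specially
 * tagged temporary stack, which rmap call sites bail out of on their
 * own, or (b) the destination is traversed after the source, so a
 * concurrent rmap walk sees either the old or the new pte. Equal
 * vm_pgoff is treated as "locks needed" here; the kernel can skip that
 * case too when it knows the copy was linked after the source. */
static bool model_need_rmap_locks(const struct vma_model *src,
				  const struct vma_model *dst)
{
	if (dst->temporary_stack)
		return false;
	return dst->vm_pgoff <= src->vm_pgoff;
}

int main(void)
{
	struct vma_model old = { .vm_pgoff = 16 };
	struct vma_model fwd = { .vm_pgoff = 32 };	/* after old */
	struct vma_model back = { .vm_pgoff = 8 };	/* before old */
	struct vma_model stack = { .temporary_stack = true };

	printf("forward move:  %d\n", model_need_rmap_locks(&old, &fwd));  /* 0 */
	printf("backward move: %d\n", model_need_rmap_locks(&old, &back)); /* 1 */
	printf("exec stack:    %d\n", model_need_rmap_locks(&old, &stack));/* 0 */
	return 0;
}
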
diff --git a/mm/mremap.c b/mm/mremap.c
index e618a706aff5..86adb872bea0 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -218,8 +218,10 @@ static int move_ptes(struct pagetable_move_control *pmc,
* When need_rmap_locks is false, we use other ways to avoid
* such races:
*
- * - During exec() shift_arg_pages(), we use a specially tagged vma
+ * - During exec() relocate_vma_down(), we use a specially tagged vma
* which rmap call sites look for using vma_is_temporary_stack().
+ * Folios mapped in the temporary stack vma cannot be migrated until
+ * the relocation is complete.
*
* - During mremap(), new_vma is often known to be placed after vma
* in rmap traversal order. This ensures rmap will always observe
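
To make the ordering argument above concrete: a toy, single-threaded
replay of the three possible interleavings of a pte move against a walk
that visits vmas in traversal order (hypothetical model, not kernel
code). With the destination after the source the walk always observes
the pte; with the destination before it, one interleaving misses the
pte entirely, which is exactly the race the locks would otherwise
prevent:

#include <assert.h>
#include <stdio.h>

/* The walk visits vma slot 0 then slot 1; the pte moves from src_pos
 * to dst_pos just before step move_step (2 = after the walk ends).
 * Returns how many times the walk observed the pte. */
static int walk_observed(int src_pos, int dst_pos, int move_step)
{
	int pte_in = src_pos, seen = 0;

	for (int step = 0; step <= 1; step++) {
		if (step == move_step)
			pte_in = dst_pos;	/* the mover runs here */
		if (pte_in == step)
			seen++;
	}
	return seen;
}

int main(void)
{
	for (int move_step = 0; move_step <= 2; move_step++) {
		/* destination traversed after the source: never missed */
		assert(walk_observed(0, 1, move_step) >= 1);
		/* destination traversed first: move_step == 1 prints 0,
		 * i.e. the walk missed the pte in both vmas */
		printf("dst first, move at %d: seen %d\n",
		       move_step, walk_observed(1, 0, move_step));
	}
	return 0;
}
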
diff --git a/mm/vma.c b/mm/vma.c
index 3b12c7579831..3da49f79e9ba 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1842,6 +1842,11 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
vmg.next = vma_iter_next_rewind(&vmi, NULL);
new_vma = vma_merge_new_range(&vmg);
+ /*
+ * rmap locks can be skipped as long as new_vma is traversed
+ * after vma during rmap walk (new_vma->vm_pgoff >= vma->vm_pgoff).
+ * See the comment in move_ptes().
+ */
if (new_vma) {
/*
* Source vma may have been merged into new_vma
@@ -1879,6 +1884,8 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
new_vma->vm_ops->open(new_vma);
if (vma_link(mm, new_vma))
goto out_vma_link;
+
+ /* new_vma->vm_pgoff is always >= vma->vm_pgoff if not merged */
*need_rmap_locks = false;
}
return new_vma;
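
The "not merged" claim follows from how the caller derives the copy's
pgoff. A small sketch of that arithmetic, assuming the copy's pgoff is
computed the way move_vma() derives it, i.e. new_pgoff = vma->vm_pgoff
+ ((old_addr - vma->vm_start) >> PAGE_SHIFT) with old_addr inside the
vma (illustrative constants, not kernel code):

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* 4K pages, for illustration */

int main(void)
{
	unsigned long vm_start = 0x7f0000000000UL;	/* source vma start */
	unsigned long vm_pgoff = 100;			/* source vma pgoff */
	unsigned long old_addr = 0x7f0000003000UL;	/* >= vm_start */

	unsigned long new_pgoff = vm_pgoff +
			((old_addr - vm_start) >> PAGE_SHIFT);

	/* the added term is non-negative, so new_pgoff >= vm_pgoff and
	 * the unmerged copy lands after the source in rmap order */
	assert(new_pgoff >= vm_pgoff);
	printf("vm_pgoff=%lu -> new_pgoff=%lu\n", vm_pgoff, new_pgoff);
	return 0;
}
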
diff --git a/mm/vma_exec.c b/mm/vma_exec.c
index 922ee51747a6..a895dd39ac46 100644
--- a/mm/vma_exec.c
+++ b/mm/vma_exec.c
@@ -63,6 +63,11 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
* process cleanup to remove whatever mess we made.
*/
pmc.for_stack = true;
+ /*
+ * pmc.need_rmap_locks is false: rmap locks can be safely skipped
+ * because folios mapped in this vma cannot be migrated until the
+ * relocation is complete. See the comment in move_ptes().
+ */
if (length != move_page_tables(&pmc))
return -ENOMEM;
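
As a companion to the comment above, a stand-in for the exec()-time
rule (hypothetical model, not kernel code): rmap call sites check
vma_is_temporary_stack() and back off, which is what lets
relocate_vma_down() move page tables without the rmap locks:

#include <stdbool.h>
#include <stdio.h>

struct vma_model {
	bool temporary_stack;	/* set while exec() relocates the stack */
};

static bool model_is_temporary_stack(const struct vma_model *vma)
{
	return vma->temporary_stack;
}

/* stand-in for an rmap walker such as a migration attempt */
static bool rmap_walk_one(const struct vma_model *vma)
{
	if (model_is_temporary_stack(vma))
		return false;	/* bail: folio can't be migrated yet */
	return true;		/* would proceed under the rmap locks */
}

int main(void)
{
	struct vma_model stack = { .temporary_stack = true };

	printf("during relocation: %s\n",
	       rmap_walk_one(&stack) ? "walked" : "skipped");
	stack.temporary_stack = false;	/* relocation complete */
	printf("after relocation:  %s\n",
	       rmap_walk_one(&stack) ? "walked" : "skipped");
	return 0;
}
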
--
2.43.0