Message-ID: <20250919124036.455709-2-kirill@shutemov.name>
Date: Fri, 19 Sep 2025 13:40:32 +0100
From: Kiryl Shutsemau <kirill@...temov.name>
To: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Rik van Riel <riel@...riel.com>,
Harry Yoo <harry.yoo@...cle.com>,
Johannes Weiner <hannes@...xchg.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Kiryl Shutsemau <kas@...nel.org>
Subject: [PATCHv2 1/5] mm/page_vma_mapped: Track if the page is mapped across page table boundary
From: Kiryl Shutsemau <kas@...nel.org>
Add a PVMW_PGTABLE_CROSSED flag that page_vma_mapped_walk() will set if
the page is mapped across a page table boundary. Unlike the other PVMW_*
flags, this one is a result of page_vma_mapped_walk() rather than being
set by the caller.

folio_referenced_one() will use it to detect whether it is safe to mlock
the folio.
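
As a rough illustration only (this sketch is not part of the patch, the
exact folio_referenced_one() wiring comes later in the series, and
mlock_vma_folio() is used purely as an example), a caller could consume
the result flag after the walk along these lines:

	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);

	while (page_vma_mapped_walk(&pvmw)) {
		/* ... process each mapped PTE/PMD as usual ... */
	}

	/*
	 * page_vma_mapped_walk() sets PVMW_PGTABLE_CROSSED once the walk
	 * crosses into another page table, so a clear flag means the
	 * mapping it examined sits under a single page table.
	 */
	if ((vma->vm_flags & VM_LOCKED) && !(pvmw.flags & PVMW_PGTABLE_CROSSED))
		mlock_vma_folio(folio, vma);
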
Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
---
include/linux/rmap.h | 5 +++++
mm/page_vma_mapped.c | 1 +
2 files changed, 6 insertions(+)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6cd020eea37a..04797cea3205 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -928,6 +928,11 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 /* Look for migration entries rather than present PTEs */
 #define PVMW_MIGRATION		(1 << 1)
 
+/* Result flags */
+
+/* The page is mapped across a page table boundary */
+#define PVMW_PGTABLE_CROSSED	(1 << 16)
+
 struct page_vma_mapped_walk {
 	unsigned long pfn;
 	unsigned long nr_pages;
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e981a1a292d2..a184b88743c3 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -309,6 +309,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				}
 				pte_unmap(pvmw->pte);
 				pvmw->pte = NULL;
+				pvmw->flags |= PVMW_PGTABLE_CROSSED;
 				goto restart;
 			}
 			pvmw->pte++;
--
2.50.1