Message-ID: <20250923110310.689126-5-kirill@shutemov.name>
Date: Tue, 23 Sep 2025 12:03:09 +0100
From: Kiryl Shutsemau <kirill@...temov.name>
To: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Rik van Riel <riel@...riel.com>,
Harry Yoo <harry.yoo@...cle.com>,
Johannes Weiner <hannes@...xchg.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Kiryl Shutsemau <kas@...nel.org>
Subject: [PATCHv3 4/5] mm/filemap: Map entire large folio faultaround
From: Kiryl Shutsemau <kas@...nel.org>
Currently, the kernel only maps the part of a large folio that fits
into the start_pgoff/end_pgoff range.

Map the entire folio where possible. This matches the finish_fault()
behaviour that the user hits on a cold page cache.

Mapping the large folio at once will allow the rmap code to mlock it
on add, as it will recognize that the folio is fully mapped and that
mlocking it is safe.

Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
---
mm/filemap.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
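
Note (not part of the patch): below is a minimal stand-alone sketch of
the page-table-boundary check the hunk relies on. It assumes a PMD_SIZE
of 2 MiB (x86-64 with 4K pages), and fits_one_page_table() plus the
sample addresses are hypothetical names/values used only for
illustration.

#include <stdbool.h>
#include <stdio.h>

#define PMD_SIZE	(2UL << 20)	/* assumed: one page table covers 2 MiB */
#define PMD_MASK	(~(PMD_SIZE - 1))

/*
 * True if a folio of folio_size bytes, mapped starting at addr0, lands
 * entirely within a single PMD-sized region, i.e. a single page table.
 */
static bool fits_one_page_table(unsigned long addr0, unsigned long folio_size)
{
	return (addr0 & PMD_MASK) == ((addr0 + folio_size - 1) & PMD_MASK);
}

int main(void)
{
	/* 64K folio starting on a 2M boundary: can be mapped in full. */
	printf("%d\n", fits_one_page_table(0x200000, 0x10000));	/* prints 1 */
	/* Same folio straddling a 2M boundary: fall back to partial map. */
	printf("%d\n", fits_one_page_table(0x3f8000, 0x10000));	/* prints 0 */
	return 0;
}
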
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..26cae577ba23 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3643,6 +3643,21 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct page *page = folio_page(folio, start);
 	unsigned int count = 0;
 	pte_t *old_ptep = vmf->pte;
+	unsigned long addr0;
+
+	/*
+	 * Map the large folio fully where possible.
+	 *
+	 * The folio must not cross VMA or page table boundary.
+	 */
+	addr0 = addr - start * PAGE_SIZE;
+	if (folio_within_vma(folio, vmf->vma) &&
+	    (addr0 & PMD_MASK) == ((addr0 + folio_size(folio) - 1) & PMD_MASK)) {
+		vmf->pte -= start;
+		page -= start;
+		addr = addr0;
+		nr_pages = folio_nr_pages(folio);
+	}
 
 	do {
 		if (PageHWPoison(page + count))
--
2.50.1