Message-ID: <20250624152654.38145-1-ioworker0@gmail.com>
Date: Tue, 24 Jun 2025 23:26:54 +0800
From: Lance Yang <ioworker0@...il.com>
To: david@...hat.com
Cc: 21cnbao@...il.com,
akpm@...ux-foundation.org,
baolin.wang@...ux.alibaba.com,
chrisl@...nel.org,
ioworker0@...il.com,
kasong@...cent.com,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-riscv@...ts.infradead.org,
lorenzo.stoakes@...cle.com,
ryan.roberts@....com,
v-songbaohua@...o.com,
x86@...nel.org,
ying.huang@...el.com,
zhengtangquan@...o.com
Subject: Re: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
On 2025/6/24 20:55, David Hildenbrand wrote:
> On 14.02.25 10:30, Barry Song wrote:
>> From: Barry Song <v-songbaohua@...o.com>
[...]
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 89e51a7a9509..8786704bd466 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1781,6 +1781,25 @@ void folio_remove_rmap_pud(struct folio *folio, struct page *page,
>>  #endif
>>  }
>> +/* We support batch unmapping of PTEs for lazyfree large folios */
>> +static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
>> +			struct folio *folio, pte_t *ptep)
>> +{
>> +	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> +	int max_nr = folio_nr_pages(folio);
>
> Let's assume we have the first page of a folio mapped at the last page
> table entry in our page table.
Good point. I'm curious whether it's something we've seen in practice ;)
>
> What prevents folio_pte_batch() from reading outside the page table?
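To make the concern concrete, here is the index math as a minimal
userspace sketch. It assumes 4K pages and 512-entry page tables (the
x86-64 defaults) and is not kernel code:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PTRS_PER_PTE	512UL	/* PTEs per page table */

int main(void)
{
	unsigned long nr_pages = 16;	/* e.g. a 64K mTHP folio */
	/* first page of the folio mapped at the last PTE slot */
	unsigned long pte_index = PTRS_PER_PTE - 1;
	unsigned long last_index = pte_index + nr_pages - 1;

	/* folio_pte_batch() would scan nr_pages entries from ptep */
	if (last_index >= PTRS_PER_PTE)
		printf("scan would run %lu entries past the page table\n",
		       last_index - PTRS_PER_PTE + 1);
	return 0;
}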
Assuming such a scenario is possible, how about this change to rule out
any chance of an out-of-bounds read:
diff --git a/mm/rmap.c b/mm/rmap.c
index fb63d9256f09..9aeae811a38b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1852,6 +1852,25 @@ static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
 	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
 	int max_nr = folio_nr_pages(folio);
 	pte_t pte = ptep_get(ptep);
+	unsigned long end_addr;
+
+	/*
+	 * To batch unmap, the entire folio's PTEs must be contiguous
+	 * and mapped within the same PTE page table, which corresponds to
+	 * a single PMD entry. Before calling folio_pte_batch(), which does
+	 * not perform boundary checks itself, we must verify that the
+	 * address range covered by the folio does not cross a PMD boundary.
+	 */
+	end_addr = addr + (max_nr * PAGE_SIZE) - 1;
+
+	/*
+	 * A fast way to check for a PMD boundary crossing is to mask
+	 * both the start and end addresses with PMD_MASK and compare
+	 * them. If they differ, the range spans at least two
+	 * different PMD-managed regions.
+	 */
+	if ((addr & PMD_MASK) != (end_addr & PMD_MASK))
+		return false;
 
 	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
 		return false;
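For completeness, the same userspace sketch (again assuming 4K pages,
so one page table spans 2M) shows the masked compare handling both
layouts; only the arithmetic is meant to be real here:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PMD_SIZE	(512UL * PAGE_SIZE)	/* reach of one page table */
#define PMD_MASK	(~(PMD_SIZE - 1))

/* mirrors the check proposed above */
static int crosses_pmd(unsigned long addr, unsigned long nr_pages)
{
	unsigned long end_addr = addr + (nr_pages * PAGE_SIZE) - 1;

	return (addr & PMD_MASK) != (end_addr & PMD_MASK);
}

int main(void)
{
	/* first page of a 16-page folio at the last PTE slot: rejected */
	printf("last-slot case crosses: %d\n",
	       crosses_pmd(PMD_SIZE - PAGE_SIZE, 16));
	/* same folio fully contained in one page table: allowed */
	printf("contained case crosses: %d\n",
	       crosses_pmd(PMD_SIZE, 16));
	return 0;
}

It's only two masks and a compare, so it should be cheap enough for
the reclaim path.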
--
Thanks,
Lance