Message-ID: <c95215b2-6ec5-4efb-a73b-7be92cda1c83@redhat.com>
Date: Tue, 27 Feb 2024 10:14:43 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, Barry Song <21cnbao@...il.com>,
akpm@...ux-foundation.org, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Barry Song <v-songbaohua@...o.com>,
Lance Yang <ioworker0@...il.com>, Yin Fengwei <fengwei.yin@...el.com>
Subject: Re: [PATCH] mm: export folio_pte_batch as a couple of modules might
need it
On 27.02.24 10:07, Ryan Roberts wrote:
> On 27/02/2024 02:40, Barry Song wrote:
>> From: Barry Song <v-songbaohua@...o.com>
>>
>> madvise and some others might need folio_pte_batch to check if a range
>> of PTEs are completely mapped to a large folio with contiguous physical
>> addresses. Let's export it for others to use.
>>
>> Cc: Lance Yang <ioworker0@...il.com>
>> Cc: Ryan Roberts <ryan.roberts@....com>
>> Cc: David Hildenbrand <david@...hat.com>
>> Cc: Yin Fengwei <fengwei.yin@...el.com>
>> Signed-off-by: Barry Song <v-songbaohua@...o.com>
>> ---
>> -v1:
>> at least two pieces of work, madv_free and madv_pageout, depend on it.
>> To avoid conflicts and dependencies, after discussing with Lance, we
>> prefer that this one lands earlier.
>
> I think this will ultimately be useful for mprotect too, though I haven't
> looked at it properly yet.
>
Yes, I think we briefly discussed that.
>>
>> mm/internal.h | 13 +++++++++++++
>> mm/memory.c | 11 +----------
>> 2 files changed, 14 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 13b59d384845..8e2bc304f671 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -83,6 +83,19 @@ static inline void *folio_raw_mapping(struct folio *folio)
>> return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
>> }
>>
>> +/* Flags for folio_pte_batch(). */
>> +typedef int __bitwise fpb_t;
>> +
>> +/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
>> +#define FPB_IGNORE_DIRTY ((__force fpb_t)BIT(0))
>> +
>> +/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
>> +#define FPB_IGNORE_SOFT_DIRTY ((__force fpb_t)BIT(1))
>> +
>> +extern int folio_pte_batch(struct folio *folio, unsigned long addr,
>> + pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
>> + bool *any_writable);
>> +
>> void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>> int nr_throttled);
>> static inline void acct_reclaim_writeback(struct folio *folio)
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 1c45b6a42a1b..319b3be05e75 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -953,15 +953,6 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
>> set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
>> }
>>
>> -/* Flags for folio_pte_batch(). */
>> -typedef int __bitwise fpb_t;
>> -
>> -/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
>> -#define FPB_IGNORE_DIRTY ((__force fpb_t)BIT(0))
>> -
>> -/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
>> -#define FPB_IGNORE_SOFT_DIRTY ((__force fpb_t)BIT(1))
>> -
>> static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>> {
>> if (flags & FPB_IGNORE_DIRTY)
>> @@ -982,7 +973,7 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>> * If "any_writable" is set, it will indicate if any other PTE besides the
>> * first (given) PTE is writable.
>> */
>
> David was talking in Lance's patch thread about improving the docs for this
> function now that it's exported. Might be worth syncing on that.
Here is my take:
Signed-off-by: David Hildenbrand <david@...hat.com>
---
mm/memory.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index d0b855a1837a8..098356b8805ae 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -971,16 +971,28 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
return pte_wrprotect(pte_mkold(pte));
}
-/*
+/**
+ * folio_pte_batch - detect a PTE batch for a large folio
+ * @folio: The large folio to detect a PTE batch for.
+ * @addr: The user virtual address the first page is mapped at.
+ * @start_ptep: Page table pointer for the first entry.
+ * @pte: Page table entry for the first page.
+ * @max_nr: The maximum number of table entries to consider.
+ * @flags: Flags to modify the PTE batch semantics.
+ * @any_writable: Optional pointer to indicate whether any entry except the
+ * first one is writable.
+ *
* Detect a PTE batch: consecutive (present) PTEs that map consecutive
- * pages of the same folio.
+ * pages of the same large folio.
*
* All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
* the accessed bit, writable bit, dirty bit (with FPB_IGNORE_DIRTY) and
* soft-dirty bit (with FPB_IGNORE_SOFT_DIRTY).
*
- * If "any_writable" is set, it will indicate if any other PTE besides the
- * first (given) PTE is writable.
+ * start_ptep must map any page of the folio. max_nr must be at least one and
+ * must be limited by the caller so scanning cannot exceed a single page table.
+ *
+ * Return: the number of table entries in the batch.
*/
static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
@@ -996,6 +1008,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
*any_writable = false;
VM_WARN_ON_FOLIO(!pte_present(pte), folio);
+ VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
+ VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
nr = pte_batch_hint(start_ptep, pte);
expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
--
2.43.2
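Purely as illustration (a hypothetical sketch, not taken from this patch or
any in-tree caller), a walk obeying that documented contract could look like
this:

/*
 * Hypothetical caller sketch: walk the PTEs of [addr, end), batching over
 * large folios. Assumes the caller already clamped "end" so the range does
 * not cross the page table that "ptep" belongs to (e.g., a PMD-level walk
 * under pte_offset_map_lock()), keeping max_nr within a single page table.
 */
static void example_pte_walk(struct vm_area_struct *vma, unsigned long addr,
			     unsigned long end, pte_t *ptep)
{
	while (addr < end) {
		pte_t pte = ptep_get(ptep);
		int max_nr = (end - addr) >> PAGE_SHIFT;
		int nr = 1;

		if (pte_present(pte)) {
			struct folio *folio = vm_normal_folio(vma, addr, pte);

			/* Only large folios can yield a batch > 1. */
			if (folio && folio_test_large(folio))
				nr = folio_pte_batch(folio, addr, ptep, pte,
						     max_nr, 0, NULL);
		}

		/* ... do something with the nr PTEs of this batch ... */

		addr += nr * PAGE_SIZE;
		ptep += nr;
	}
}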
>
>> -static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>> +int folio_pte_batch(struct folio *folio, unsigned long addr,
>
> fork() is very performance-sensitive. Is there a risk we are regressing
> performance by making this out-of-line? Although it's in the same compilation
> unit so the compiler may well inline it anyway?
Easy to verify by looking at the generated asm, I guess?
>
> Either way, perhaps we are better off making it inline in the header? That would
> avoid needing to rerun David's micro-benchmarks for fork() and munmap().
That way, the compiler can most certainly optimize it better outside of mm/memory.c as well.
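I.e., something like this hand-wavy sketch (untested;
__pte_batch_clear_ignored() would have to move along with it):

diff --git a/mm/internal.h b/mm/internal.h
--- a/mm/internal.h
+++ b/mm/internal.h
@@ ...
-extern int folio_pte_batch(struct folio *folio, unsigned long addr,
-		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable);
+static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
+		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
+		bool *any_writable)
+{
+	/* ... body moved verbatim from mm/memory.c, otherwise unchanged ... */
+}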
--
Cheers,
David / dhildenb