Message-ID: <43ddbdee-0cc2-496b-8ea6-b90a04c64d68@arm.com>
Date: Mon, 30 Jun 2025 17:23:37 +0530
From: Dev Jain <dev.jain@....com>
To: Ryan Roberts <ryan.roberts@....com>, akpm@...ux-foundation.org
Cc: david@...hat.com, willy@...radead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, catalin.marinas@....com, will@...nel.org,
Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com, vbabka@...e.cz,
jannh@...gle.com, anshuman.khandual@....com, peterx@...hat.com,
joey.gouly@....com, ioworker0@...il.com, baohua@...nel.org,
kevin.brodsky@....com, quic_zhenhuah@...cinc.com,
christophe.leroy@...roup.eu, yangyicong@...ilicon.com,
linux-arm-kernel@...ts.infradead.org, hughd@...gle.com,
yang@...amperecomputing.com, ziy@...dia.com
Subject: Re: [PATCH v4 3/4] mm: Optimize mprotect() by PTE-batching
On 30/06/25 5:20 pm, Ryan Roberts wrote:
> On 30/06/2025 12:21, Dev Jain wrote:
>> On 30/06/25 4:01 pm, Ryan Roberts wrote:
>>> On 28/06/2025 12:34, Dev Jain wrote:
>>>> Use folio_pte_batch() to batch-process a large folio. Reuse the folio
>>>> from the prot_numa case if possible.
>>>>
>>>> For all cases other than the PageAnonExclusive case, if a condition holds
>>>> true for one pte in the batch, it is guaranteed to hold true for the other
>>>> ptes in the batch too; for pte_needs_soft_dirty_wp(), we ensure this by not
>>>> passing FPB_IGNORE_SOFT_DIRTY. modify_prot_start_ptes() collects the dirty
>>>> and access bits across the batch, which lets us batch across
>>>> pte_dirty(): this is correct since the dirty bit on the PTE really is
>>>> just an indication that the folio got written to, so even if the PTE is
>>>> not actually dirty (but one of the PTEs in the batch is), the wp-fault
>>>> optimization can be made.
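>>>>
>>>> For illustration only (not part of this patch): a minimal sketch of what
>>>> the generic, non-batched fallback for such a/d collection could look like,
>>>> built on the existing ptep_modify_prot_start() API; the real helper is
>>>> introduced earlier in this series and may differ in detail:
>>>>
>>>>         pte_t pte = ptep_modify_prot_start(vma, addr, ptep);
>>>>         pte_t tmp;
>>>>
>>>>         while (--nr_ptes) {
>>>>                 ptep++;
>>>>                 addr += PAGE_SIZE;
>>>>                 tmp = ptep_modify_prot_start(vma, addr, ptep);
>>>>
>>>>                 /* Accumulate a/d bits so one pte stands for the batch. */
>>>>                 if (pte_dirty(tmp))
>>>>                         pte = pte_mkdirty(pte);
>>>>                 if (pte_young(tmp))
>>>>                         pte = pte_mkyoung(pte);
>>>>         }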
>>>>
>>>> The crux now is how to batch around the PageAnonExclusive case; we must
>>>> check the corresponding condition for every single page. Therefore, from
>>>> the large folio batch, we derive sub-batches of ptes mapping pages with
>>>> the same PageAnonExclusive state, process each sub-batch, then determine
>>>> and process the next sub-batch, and so on. Note that this does not cause
>>>> any extra overhead; if, say, the size of the folio batch is 512, then the
>>>> sub-batch processing in total will take 512 iterations, which is the same
>>>> as what we would have done before. For example, if pages 0-99 of a
>>>> 512-page batch are exclusive and the rest are not, we simply commit two
>>>> sub-batches of 100 and 412 ptes.
>>>>
>>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>>> ---
>>>> mm/mprotect.c | 143 +++++++++++++++++++++++++++++++++++++++++---------
>>>> 1 file changed, 117 insertions(+), 26 deletions(-)
>>>>
>>>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>>>> index 627b0d67cc4a..28c7ce7728ff 100644
>>>> --- a/mm/mprotect.c
>>>> +++ b/mm/mprotect.c
>>>> @@ -40,35 +40,47 @@
>>>> #include "internal.h"
>>>> -bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>>>> -                             pte_t pte)
>>>> -{
>>>> -        struct page *page;
>>>> +enum tristate {
>>>> +        TRI_FALSE = 0,
>>>> +        TRI_TRUE = 1,
>>>> +        TRI_MAYBE = -1,
>>>> +};
>>>> +/*
>>>> + * Returns enum tristate indicating whether the pte can be changed to writable.
>>>> + * If TRI_MAYBE is returned, then the folio is anonymous and the user must
>>>> + * additionally check PageAnonExclusive() for every page in the desired range.
>>>> + */
>>>> +static int maybe_change_pte_writable(struct vm_area_struct *vma,
>>>> +                                     unsigned long addr, pte_t pte,
>>>> +                                     struct folio *folio)
>>>> +{
>>>>         if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE)))
>>>> -                return false;
>>>> +                return TRI_FALSE;
>>>>
>>>>         /* Don't touch entries that are not even readable. */
>>>>         if (pte_protnone(pte))
>>>> -                return false;
>>>> +                return TRI_FALSE;
>>>>
>>>>         /* Do we need write faults for softdirty tracking? */
>>>>         if (pte_needs_soft_dirty_wp(vma, pte))
>>>> -                return false;
>>>> +                return TRI_FALSE;
>>>>
>>>>         /* Do we need write faults for uffd-wp tracking? */
>>>>         if (userfaultfd_pte_wp(vma, pte))
>>>> -                return false;
>>>> +                return TRI_FALSE;
>>>>
>>>>         if (!(vma->vm_flags & VM_SHARED)) {
>>>>                 /*
>>>>                  * Writable MAP_PRIVATE mapping: We can only special-case on
>>>>                  * exclusive anonymous pages, because we know that our
>>>>                  * write-fault handler similarly would map them writable without
>>>> -                 * any additional checks while holding the PT lock.
>>>> +                 * any additional checks while holding the PT lock. So if the
>>>> +                 * folio is not anonymous, we know we cannot change pte to
>>>> +                 * writable. If it is anonymous then the caller must further
>>>> +                 * check that the page is AnonExclusive().
>>>>                  */
>>>> -                page = vm_normal_page(vma, addr, pte);
>>>> -                return page && PageAnon(page) && PageAnonExclusive(page);
>>>> +                return (!folio || folio_test_anon(folio)) ? TRI_MAYBE : TRI_FALSE;
>>>>         }
>>>>
>>>>         VM_WARN_ON_ONCE(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte));
>>>> @@ -80,15 +92,61 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>>>>          * FS was already notified and we can simply mark the PTE writable
>>>>          * just like the write-fault handler would do.
>>>>          */
>>>> -        return pte_dirty(pte);
>>>> +        return pte_dirty(pte) ? TRI_TRUE : TRI_FALSE;
>>>> +}
>>>> +
>>>> +/*
>>>> + * Returns the number of pages within the folio, starting from the page
>>>> + * indicated by pgidx and up to pgidx + max_nr, that have the same value of
>>>> + * PageAnonExclusive(). Must only be called for anonymous folios. Value of
>>>> + * PageAnonExclusive() is returned in *exclusive.
>>>> + */
>>>> +static int anon_exclusive_batch(struct folio *folio, int pgidx, int max_nr,
>>>> +                                bool *exclusive)
>>>> +{
>>>> +        struct page *page;
>>>> +        int nr = 1;
>>>> +
>>>> +        if (!folio) {
>>>> +                *exclusive = false;
>>>> +                return nr;
>>>> +        }
>>>> +
>>>> +        page = folio_page(folio, pgidx++);
>>>> +        *exclusive = PageAnonExclusive(page);
>>>> +        while (nr < max_nr) {
>>>> +                page = folio_page(folio, pgidx++);
>>>> +                if ((*exclusive) != PageAnonExclusive(page))
>>> nit: brackets not required around *exclusive.
>> Thanks, I'll drop it. I have a habit of putting brackets everywhere
>> because debugging operator-precedence bugs is a nightmare - finally
>> the time has come to learn operator precedence!
>>
>>>> +                        break;
>>>> +                nr++;
>>>> +        }
>>>> +
>>>> +        return nr;
>>>> +}
>>>> +
>>>> +bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>>>> +                             pte_t pte)
>>>> +{
>>>> +        struct page *page;
>>>> +        int ret;
>>>> +
>>>> +        ret = maybe_change_pte_writable(vma, addr, pte, NULL);
>>>> +        if (ret == TRI_MAYBE) {
>>>> +                page = vm_normal_page(vma, addr, pte);
>>>> +                ret = page && PageAnon(page) && PageAnonExclusive(page);
>>>> +        }
>>>> +
>>>> +        return ret;
>>>> }
>>>> static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
>>>> -                pte_t *ptep, pte_t pte, int max_nr_ptes)
>>>> +                pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t switch_off_flags)
>>>> {
>>>> -        const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>> +        fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>> +
>>>> +        flags &= ~switch_off_flags;
>>> This is mega confusing when reading the caller, because the caller passes
>>> FPB_IGNORE_SOFT_DIRTY and that actually means DON'T ignore soft-dirty.
>>>
>>> Can't we just pass in the flags we want?
>> Yup that is cleaner.
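>>
>> Something like this, perhaps (sketch only; the caller then passes exactly
>> the FPB_* flags it wants, and the NULL arguments assume folio_pte_batch()'s
>> optional out-params stay unused here, as in the current code):
>>
>> static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
>>                 pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t flags)
>> {
>>         if (!folio || !folio_test_large(folio))
>>                 return 1;
>>
>>         return folio_pte_batch(folio, addr, ptep, pte, max_nr_ptes, flags,
>>                                NULL, NULL, NULL);
>> }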
>>
>>>> -        if (!folio || !folio_test_large(folio) || (max_nr_ptes == 1))
>>>> +        if (!folio || !folio_test_large(folio))
>>> What's the rationale for dropping the max_nr_ptes == 1 condition? If you don't
>>> need it, why did you add it in the earlier patch?
>> Stupid me forgot to drop it from the earlier patch.
>>
>>>>                 return 1;
>>>>
>>>>         return folio_pte_batch(folio, addr, ptep, pte, max_nr_ptes, flags,
>>>>                                NULL, NULL, NULL);
>>>> @@ -154,7 +212,8 @@ static int prot_numa_skip_ptes(struct folio **foliop, struct vm_area_struct *vma
>>>>         }
>>>>
>>>> skip_batch:
>>>> -        nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte, max_nr_ptes);
>>>> +        nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte,
>>>> +                                           max_nr_ptes, 0);
>>>> out:
>>>>         *foliop = folio;
>>>>         return nr_ptes;
>>>> @@ -191,7 +250,10 @@ static long change_pte_range(struct mmu_gather *tlb,
>>>>                 if (pte_present(oldpte)) {
>>>>                         int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
>>>>                         struct folio *folio = NULL;
>>>> -                        pte_t ptent;
>>>> +                        int sub_nr_ptes, pgidx = 0;
>>>> +                        pte_t ptent, newpte;
>>>> +                        bool sub_set_write;
>>>> +                        int set_write;
>>>>                         /*
>>>>                          * Avoid trapping faults against the zero or KSM
>>>> @@ -206,6 +268,11 @@ static long change_pte_range(struct mmu_gather *tlb,
>>>>                                 continue;
>>>>                         }
>>>> +                        if (!folio)
>>>> +                                folio = vm_normal_folio(vma, addr, oldpte);
>>>> +
>>>> +                        nr_ptes = mprotect_folio_pte_batch(folio, addr, pte, oldpte,
>>>> +                                        max_nr_ptes, FPB_IGNORE_SOFT_DIRTY);
>>> From the other thread, my memory is jogged that this function ignores the
>>> write permission bit. So I think that's opening up a bug when applied here?
>>> If the first pte is writable but the rest are not (COW), doesn't this now
>>> make them all writable? I don't *think* that's a problem for the prot_numa
>>> use, but I could be wrong.
>> No, we are not ignoring the write permission bit. There is no way currently to
>> do that via folio_pte_batch. So the pte batch is either entirely writable or
>> entirely not.
> How are you enforcing that then? Surely folio_pte_batch() is the only thing
> looking at the individual PTEs? It's not guaranteed that just because the PTEs
> all belong to a single VMA that the permissions are all the same; you could have
> an RW private anon VMA which has been forked so all set to COW then individual
> PTEs have faulted and broken COW (as an example).
Yup, I just replied in the other mail; I missed the pte_wrprotect() in
folio_pte_batch().
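
For reference, the helper that bit me is roughly this (paraphrasing from
memory, so the exact body may differ):

static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
{
        if (flags & FPB_IGNORE_DIRTY)
                pte = pte_mkclean(pte);
        if (flags & FPB_IGNORE_SOFT_DIRTY)
                pte = pte_clear_soft_dirty(pte);
        /* The access and write bits are unconditionally normalized away. */
        return pte_wrprotect(pte_mkold(pte));
}

So write-bit differences never break the batch, which is exactly the COW
scenario Ryan describes.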
>
>
>>>>                         oldpte = modify_prot_start_ptes(vma, addr, pte, nr_ptes);
>>> Even if I'm wrong about ignoring the write bit being a bug, I don't think
>>> the docs for this function permit the write bit to differ across the batch?
>>>
>>>>                         ptent = pte_modify(oldpte, newprot);
>>>> @@ -227,15 +294,39 @@ static long change_pte_range(struct mmu_gather *tlb,
>>>>                          * example, if a PTE is already dirty and no other
>>>>                          * COW or special handling is required.
>>>>                          */
>>>> -                        if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
>>>> -                            !pte_write(ptent) &&
>>>> -                            can_change_pte_writable(vma, addr, ptent))
>>>> -                                ptent = pte_mkwrite(ptent, vma);
>>>> -
>>>> -                        modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr_ptes);
>>>> -                        if (pte_needs_flush(oldpte, ptent))
>>>> -                                tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
>>>> -                        pages++;
>>>> +                        set_write = (cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
>>>> +                                    !pte_write(ptent);
>>>> +                        if (set_write)
>>>> +                                set_write = maybe_change_pte_writable(vma, addr, ptent, folio);
>>> Why not just:
>>>         set_write = (cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
>>>                     !pte_write(ptent) &&
>>>                     maybe_change_pte_writable(...);
>> set_write is an int, which is supposed to span {TRI_MAYBE, TRI_FALSE, TRI_TRUE},
>> whereas the RHS of this statement will always return a boolean.
>>
>> You proposed it like this in your diff; it took hours for my eyes to catch this : )
> Ahh good spot! I don't really love the tristate thing, but couldn't think of
> anything better. So I guess it should really be:
>
> set_write = ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
>              !pte_write(ptent)) ? TRI_MAYBE : TRI_FALSE;
> if (set_write == TRI_MAYBE)
>         set_write = maybe_change_pte_writable(...);
>
> That would make it a bit more obvious as to what is going on for the reader?
Nice!
>
> Thanks,
> Ryan
>
>>> ?
>>>
>>>> +
>>>> +                        while (nr_ptes) {
>>>> +                                if (set_write == TRI_MAYBE) {
>>>> +                                        sub_nr_ptes = anon_exclusive_batch(folio,
>>>> +                                                        pgidx, nr_ptes, &sub_set_write);
>>>> +                                } else {
>>>> +                                        sub_nr_ptes = nr_ptes;
>>>> +                                        sub_set_write = (set_write == TRI_TRUE);
>>>> +                                }
>>>> +
>>>> +                                if (sub_set_write)
>>>> +                                        newpte = pte_mkwrite(ptent, vma);
>>>> +                                else
>>>> +                                        newpte = ptent;
>>>> +
>>>> +                                modify_prot_commit_ptes(vma, addr, pte, oldpte,
>>>> +                                                        newpte, sub_nr_ptes);
>>>> +                                if (pte_needs_flush(oldpte, newpte))
>>> What did we conclude with pte_needs_flush()? I thought there was an arch where
>>> it looked dodgy calling this for just the pte at the head of the batch?
>> Powerpc flushes if the access bit transitions from set to unset. x86 does
>> that for both the dirty and access bits. Both problems are solved by
>> modify_prot_start_ptes(), which collects the a/d bits across the batch,
>> both in the generic implementation and the arm64 implementation.
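>>
>> In other words (an illustrative sketch, not new code in the patch): since
>> oldpte ends up carrying the accumulated a/d bits for the whole batch, the
>> single check can only over-report, never under-report, the need to flush:
>>
>>         /*
>>          * oldpte has the OR of the access/dirty bits across the batch,
>>          * so this check is conservative for every pte in the sub-batch.
>>          */
>>         if (pte_needs_flush(oldpte, newpte))
>>                 tlb_flush_pte_range(tlb, addr, sub_nr_ptes * PAGE_SIZE);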
>>
>>> Thanks,
>>> Ryan
>>>
>>>> +                                        tlb_flush_pte_range(tlb, addr,
>>>> +                                                        sub_nr_ptes * PAGE_SIZE);
>>>> +
>>>> +                                addr += sub_nr_ptes * PAGE_SIZE;
>>>> +                                pte += sub_nr_ptes;
>>>> +                                oldpte = pte_advance_pfn(oldpte, sub_nr_ptes);
>>>> +                                ptent = pte_advance_pfn(ptent, sub_nr_ptes);
>>>> +                                nr_ptes -= sub_nr_ptes;
>>>> +                                pages += sub_nr_ptes;
>>>> +                                pgidx += sub_nr_ptes;
>>>> +                        }
>>>>                 } else if (is_swap_pte(oldpte)) {
>>>>                         swp_entry_t entry = pte_to_swp_entry(oldpte);
>>>>                         pte_t newpte;