Message-ID: <05f3517a-754e-40e3-a0e1-bc654f6ed3c9@redhat.com>
Date: Tue, 1 Jul 2025 18:17:21 +0200
From: David Hildenbrand <david@...hat.com>
To: Harry Yoo <harry.yoo@...cle.com>, Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
baolin.wang@...ux.alibaba.com, chrisl@...nel.org, ioworker0@...il.com,
kasong@...cent.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
lorenzo.stoakes@...cle.com, ryan.roberts@....com, v-songbaohua@...o.com,
x86@...nel.org, ying.huang@...el.com, zhengtangquan@...o.com
Subject: Re: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large
folios during reclamation
>>> + /* Nuke the page table entry. */
>>> + pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
>>> + /*
>>> + * We clear the PTE but do not flush so potentially
>>> + * a remote CPU could still be writing to the folio.
>>> + * If the entry was previously clean then the
>>> + * architecture must guarantee that a clear->dirty
>>> + * transition on a cached TLB entry is written through
>>> + * and traps if the PTE is unmapped.
>>> + */
>>> + if (should_defer_flush(mm, flags))
>>> + set_tlb_ubc_flush_pending(mm, pteval, address, end_addr);
>>
>> When the first pte of a PTE-mapped THP has _PAGE_PROTNONE bit set
>> (by NUMA balancing), can set_tlb_ubc_flush_pending() mistakenly think that
>> it doesn't need to flush the whole range, although some ptes in the range
>> doesn't have _PAGE_PROTNONE bit set?
>
> No, then folio_pte_batch() should have returned nr < folio_nr_pages(folio).

Right, folio_pte_batch() currently does not batch across PTEs that
differ in pte_protnone().
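
To illustrate the point (this is only a simplified userspace model, not
the kernel's actual folio_pte_batch() from mm/internal.h, which compares
full PTE values with the ignored bits masked out): a batch walk stops as
soon as a PTE's protnone state differs from the first PTE's, so a range
mixing protnone and non-protnone entries is never reported as one batch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy PTE: only the fields relevant to this discussion. */
struct toy_pte {
	bool present;
	bool protnone;	/* e.g. _PAGE_PROTNONE set by NUMA balancing */
};

/*
 * Count how many consecutive PTEs, starting at ptes[0], can be
 * treated as one batch.  The batch ends at the first PTE whose
 * protnone state differs from the first one, mirroring how
 * folio_pte_batch() refuses to batch across pte_protnone() changes.
 */
static size_t toy_pte_batch(const struct toy_pte *ptes, size_t max_nr)
{
	size_t nr;

	for (nr = 1; nr < max_nr; nr++) {
		if (!ptes[nr].present ||
		    ptes[nr].protnone != ptes[0].protnone)
			break;
	}
	return nr;
}
```

With this model, a 4-PTE range whose third entry flips protnone yields a
batch of 2, so callers like the reclaim path above would see
nr < folio_nr_pages(folio) and handle the remainder separately.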
--
Cheers,
David / dhildenb