Message-ID: <dafd2f83-c242-4d60-8270-8e52e2e066e6@linux.dev>
Date: Wed, 31 Dec 2025 11:03:40 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>,
akpm@...ux-foundation.org
Cc: will@...nel.org, aneesh.kumar@...nel.org, npiggin@...il.com,
peterz@...radead.org, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com, arnd@...db.de,
lorenzo.stoakes@...cle.com, ziy@...dia.com, baolin.wang@...ux.alibaba.com,
Liam.Howlett@...cle.com, npache@...hat.com, ryan.roberts@....com,
dev.jain@....com, baohua@...nel.org, ioworker0@...il.com,
shy828301@...il.com, riel@...riel.com, jannh@...gle.com,
linux-arch@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] mm: embed TLB flush IPI check in
tlb_remove_table_sync_one()
On 2025/12/31 04:33, David Hildenbrand (Red Hat) wrote:
> On 12/29/25 15:52, Lance Yang wrote:
>> From: Lance Yang <lance.yang@...ux.dev>
>>
>> Embed the tlb_table_flush_implies_ipi_broadcast() check directly inside
>> tlb_remove_table_sync_one() instead of requiring every caller to check
>> it explicitly. This relies on callers to do the right thing: flush with
>> freed_tables=true or unshared_tables=true beforehand.
>>
>> All existing callers satisfy this requirement:
>>
>> 1. mm/khugepaged.c:1188 (collapse_huge_page):
>>
>> pmdp_collapse_flush(vma, address, pmd)
>> -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
>> -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>> -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>> So freed_tables=true before calling tlb_remove_table_sync_one().
>>
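For reference, the relevant sequence in collapse_huge_page() looks
roughly like this (simplified sketch, unrelated details elided):

	_pmd = pmdp_collapse_flush(vma, address, pmd);	/* flush w/ freed_tables=true */
	spin_unlock(pmd_ptl);
	mmu_notifier_invalidate_range_end(&range);
	tlb_remove_table_sync_one();			/* IPI now skippable */

so the freed_tables flush always precedes the sync.
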
>> 2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables):
>>
>> tlb_flush_mmu_tlbonly(tlb)
>> -> tlb_flush(tlb)
>> -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
>> -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>> unshared_tables=true (equivalent to freed_tables for sending IPIs).
>>
>> 3. mm/mmu_gather.c:341 (__tlb_remove_table_one):
>>
>> When we can't allocate a batch page in tlb_remove_table(), we do:
>>
>> tlb_table_invalidate(tlb)
>> -> tlb_flush_mmu_tlbonly(tlb)
>> -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>> -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>> Then:
>> tlb_remove_table_one(table)
>> -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
>> -> tlb_remove_table_sync_one()
>>
>> freed_tables=true, and this should work too.
>>
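For reference, the fallback path in tlb_remove_table() is roughly
(simplified from mm/mmu_gather.c):

	if (*batch == NULL) {
		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			tlb_table_invalidate(tlb);	/* flush w/ freed_tables=true */
			tlb_remove_table_one(table);	/* -> tlb_remove_table_sync_one() */
			return;
		}
		(*batch)->nr = 0;
	}

so here too the flush happens before the sync.
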
>> Why is tlb->freed_tables guaranteed? Because callers like
>> pte_free_tlb() (via free_pte_range) set freed_tables=true before
>> calling __pte_free_tlb(), which then calls tlb_remove_table().
>> We cannot free page tables without freed_tables=true.
>>
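For example, pte_free_tlb() in include/asm-generic/tlb.h is roughly:

	#define pte_free_tlb(tlb, ptep, address)			\
		do {							\
			tlb_flush_pmd_range(tlb, address, PAGE_SIZE);	\
			tlb->freed_tables = 1;				\
			__pte_free_tlb(tlb, ptep, address);		\
		} while (0)

so the flag is set before a table ever reaches tlb_remove_table().
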
>> Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
>> (CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d
>> ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional").
>>
>> 4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro):
>>
>> Same as #1. These also use pmdp_collapse_flush() beforehand.
>>
>> Suggested-by: David Hildenbrand (Red Hat) <david@...nel.org>
>> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
>
> LGTM. I think we should document that somewhere. Can we add some
> kerneldoc for tlb_remove_table_sync_one() where we document that it
> doesn't do any sync if a previous TLB flush when removing/unsharing page
> tables would have already performed an IPI?

Thanks! Fair point. Would something like this work?
---8<---
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7b588643cbae..9139f0a6b8bd 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -274,6 +274,20 @@ static void tlb_remove_table_smp_sync(void *arg)
 	/* Simply deliver the interrupt */
 }
 
+/**
+ * tlb_remove_table_sync_one - Send IPI to synchronize page table operations
+ *
+ * Sends an IPI to all CPUs to synchronize when freeing or unsharing page
+ * tables (e.g., to ensure concurrent GUP-fast walkers have completed).
+ *
+ * If a previous TLB flush (when removing/unsharing page tables) already
+ * broadcast IPIs to all CPUs, the redundant IPI is skipped. The optimization
+ * relies on architectures implementing tlb_table_flush_implies_ipi_broadcast()
+ * to indicate when their TLB flush provides sufficient synchronization.
+ *
+ * Note that callers must ensure that a TLB flush with freed_tables=true or
+ * unshared_tables=true has been performed before calling.
+ */
 void tlb_remove_table_sync_one(void)
 {
 	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
---
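And as a purely illustrative sketch (not necessarily the exact code in
this series), the x86 side of the contract would be something like:

	/*
	 * Sketch only: without broadcast TLB invalidation (e.g. AMD INVLPGB),
	 * a flush with freed_tables/unshared_tables set already sent IPIs to
	 * every CPU in mm_cpumask(), so the extra sync IPI is redundant.
	 */
	static inline bool tlb_table_flush_implies_ipi_broadcast(void)
	{
		return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
	}
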
Cheers,
Lance
>
>> ---
>> mm/mmu_gather.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>> index 7468ec388455..7b588643cbae 100644
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -276,6 +276,10 @@ static void tlb_remove_table_smp_sync(void *arg)
>>  void tlb_remove_table_sync_one(void)
>>  {
>> +	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
>> +	if (tlb_table_flush_implies_ipi_broadcast())
>> +		return;
>> +
>>  	/*
>>  	 * This isn't an RCU grace period and hence the page-tables cannot be
>>  	 * assumed to be actually RCU-freed.
>
>