Message-ID: <90c1ff67-46fb-4ddd-9bdd-43633f89dda2@linux.dev>
Date: Sun, 21 Dec 2025 18:43:32 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: will@...nel.org, aneesh.kumar@...nel.org, npiggin@...il.com,
peterz@...radead.org, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com, arnd@...db.de,
lorenzo.stoakes@...cle.com, ziy@...dia.com, baolin.wang@...ux.alibaba.com,
Liam.Howlett@...cle.com, npache@...hat.com, ryan.roberts@....com,
dev.jain@....com, baohua@...nel.org, ioworker0@...il.com,
shy828301@...il.com, riel@...riel.com, jannh@...gle.com,
linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: [PATCH RFC 3/3] mm/khugepaged: skip redundant IPI in
collapse_huge_page()

On 2025/12/19 16:25, David Hildenbrand (Red Hat) wrote:
> On 12/18/25 15:35, Lance Yang wrote:
>>
>>
>> On 2025/12/18 21:13, David Hildenbrand (Red Hat) wrote:
>>> On 12/13/25 09:00, Lance Yang wrote:
>>>> From: Lance Yang <lance.yang@...ux.dev>
>>>>
>>>> Similar to the hugetlb PMD unsharing optimization, skip the second IPI
>>>> in collapse_huge_page() when the TLB flush already provides necessary
>>>> synchronization.
>>>>
>>>> Before commit a37259732a7d ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE
>>>> unconditional"), bare metal x86 didn't enable
>>>> MMU_GATHER_RCU_TABLE_FREE.
>>>> In that configuration, tlb_remove_table_sync_one() was a NOP. GUP-fast
>>>> synchronization relied on IRQ disabling, which blocks TLB flush IPIs.
>>>>
>>>> When Rik made MMU_GATHER_RCU_TABLE_FREE unconditional to support AMD's
>>>> INVLPGB, all x86 systems started sending the second IPI. However, on
>>>> native x86 this is redundant:
>>>>
>>>> - pmdp_collapse_flush() calls flush_tlb_range(), sending IPIs to
>>>>   all CPUs to invalidate TLB entries
>>>>
>>>> - GUP-fast runs with IRQs disabled, so when the flush IPI completes,
>>>>   any concurrent GUP-fast must have finished
>>>>
>>>> - tlb_remove_table_sync_one() provides no additional synchronization
>>>>
>>>> On x86, skip the second IPI when running native (without paravirt) and
>>>> without INVLPGB. For paravirt with non-native flush_tlb_multi and for
>>>> INVLPGB, conservatively keep both IPIs.
>>>>
>>>> Use tlb_table_flush_implies_ipi_broadcast(), consistent with the
>>>> hugetlb optimization.
>>>>
>>>> Suggested-by: David Hildenbrand (Red Hat) <david@...nel.org>
>>>> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
>>>> ---
>>>> mm/khugepaged.c | 7 ++++++-
>>>> 1 file changed, 6 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>> index 97d1b2824386..06ea793a8190 100644
>>>> --- a/mm/khugepaged.c
>>>> +++ b/mm/khugepaged.c
>>>> @@ -1178,7 +1178,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>>>>  	_pmd = pmdp_collapse_flush(vma, address, pmd);
>>>>  	spin_unlock(pmd_ptl);
>>>>  	mmu_notifier_invalidate_range_end(&range);
>>>> -	tlb_remove_table_sync_one();
>>>> +	/*
>>>> +	 * Skip the second IPI if the TLB flush above already synchronized
>>>> +	 * with concurrent GUP-fast via broadcast IPIs.
>>>> +	 */
>>>> +	if (!tlb_table_flush_implies_ipi_broadcast())
>>>> +		tlb_remove_table_sync_one();
>>>
>>> We end up calling
>>>
>>> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>>>
>>> -> flush_tlb_mm_range(freed_tables = true)
>>>
>>> -> flush_tlb_multi(mm_cpumask(mm), info);
>>>
>>> So freed_tables=true and we should be doing the right thing.
>>
>> Yep ;)
>>
>>> BTW, I was wondering whether we should embed that
>>> tlb_table_flush_implies_ipi_broadcast() check in
>>> tlb_remove_table_sync_one() instead.
>>> It then relies on the caller to do the right thing (flush with
>>> freed_tables=true or unshared_tables = true).
>>>
>>> Thoughts?
>>
>> Good point! Let me check the other callers to ensure they
>> are all preceded by a flush with freed_tables=true (or unshared_tables).
>>
>> Will get back to you with what I find :)
>
> The use case in tlb_table_flush() is a bit confusing. But I would assume
> that we have a TLB flush with remove_tables=true beforehand. Otherwise
> we cannot possibly free the page table.

Right! I assume you meant freed_tables=true (not remove_tables) ;)

Verified all callers have proper TLB flushes *beforehand*:

-> 1. mm/khugepaged.c:1188 (collapse_huge_page)

   pmdp_collapse_flush(vma, address, pmd)
     -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
       -> flush_tlb_mm_range(mm, ..., freed_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   So freed_tables=true and we should be doing the right thing :)
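
For reference, on x86 flush_tlb_range() is a macro whose last argument is
freed_tables; quoting arch/x86/include/asm/tlbflush.h from memory, so the
exact shape may have drifted:

#define flush_tlb_range(vma, start, end)				\
	flush_tlb_mm_range((vma)->vm_mm, start, end,			\
			   ((vma)->vm_flags & VM_HUGETLB)		\
				? huge_page_shift(hstate_vma(vma))	\
				: PAGE_SHIFT, true)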

-> 2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables)

   tlb_flush_mmu_tlbonly(tlb)
     -> tlb_flush(tlb)
       -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   unshared_tables=true (equivalent to freed_tables for sending IPIs).

-> 3. mm/mmu_gather.c:341 (__tlb_remove_table_one)

   When we can't allocate a batch page in tlb_remove_table(), we do:

   tlb_table_invalidate(tlb)
     -> tlb_flush_mmu_tlbonly(tlb)
       -> flush_tlb_mm_range(mm, ..., freed_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   Then:

   tlb_remove_table_one(table)
     -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
       -> tlb_remove_table_sync_one()

   freed_tables=true, and this should work too.
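
That fallback path is visible in tlb_remove_table() itself; quoting
mm/mmu_gather.c from memory, so the exact shape may differ slightly:

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch == NULL) {
		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			/* No batch page: flush now, then sync and free. */
			tlb_table_invalidate(tlb);
			tlb_remove_table_one(table);
			return;
		}
		(*batch)->nr = 0;
	}

	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH)
		tlb_table_flush(tlb);
}
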
Why is tlb->freed_tables guaranteed? Because callers like pte_free_tlb()
(via free_pte_range) set freed_tables=true before calling __pte_free_tlb(),
which then calls tlb_remove_table(). As you mentioned, we cannot free page
tables without freed_tables=true.
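
For instance, the generic definition sets the flag right before freeing
(quoting include/asm-generic/tlb.h from memory, illustrative only):

#ifndef pte_free_tlb
#define pte_free_tlb(tlb, ptep, address)			\
	do {							\
		tlb_flush_pmd_range(tlb, address, PAGE_SIZE);	\
		tlb->freed_tables = 1;				\
		__pte_free_tlb(tlb, ptep, address);		\
	} while (0)
#endif
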
Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
(CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d.
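
(That NOP was just the !CONFIG_MMU_GATHER_RCU_TABLE_FREE stub, roughly:

#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
void tlb_remove_table_sync_one(void);
#else
static inline void tlb_remove_table_sync_one(void) { }
#endif

again quoting include/asm-generic/tlb.h from memory.)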

-> 4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro)

   Same as #1.

So all callers satisfy the requirement! Will embed the check in v2, along
the lines of the sketch below. Hopefully I didn't miss any callers ;)
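
Untested sketch of the v2 embedding (assumes the
tlb_table_flush_implies_ipi_broadcast() helper from patch 1/3):

void tlb_remove_table_sync_one(void)
{
	/*
	 * If the preceding TLB flush (freed_tables/unshared_tables set)
	 * already broadcast IPIs, any concurrent GUP-fast walker (which
	 * runs with IRQs disabled) has finished, so the extra IPI is
	 * redundant.
	 */
	if (tlb_table_flush_implies_ipi_broadcast())
		return;

	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
}
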
Cheers,
Lance