Message-Id: <20260203112401.3889029-12-zhouchuyi@bytedance.com>
Date: Tue, 3 Feb 2026 19:24:01 +0800
From: "Chuyi Zhou" <zhouchuyi@...edance.com>
To: <tglx@...utronix.de>, <mingo@...hat.com>, <luto@...nel.org>,
<peterz@...radead.org>, <paulmck@...nel.org>, <muchun.song@...ux.dev>,
<bp@...en8.de>, <dave.hansen@...ux.intel.com>
Cc: <linux-kernel@...r.kernel.org>, "Chuyi Zhou" <zhouchuyi@...edance.com>
Subject: [PATCH 11/11] x86/mm: Enable preemption during flush_tlb_kernel_range
flush_tlb_kernel_range() is invoked when kernel memory mappings change.
On x86 platforms without the INVLPGB feature, we need to send IPIs to
every online CPU and synchronously wait for them all to complete
do_kernel_range_flush(). This can be time-consuming, for example when
there are many CPUs or when some of them respond slowly (e.g. because
they are running with interrupts disabled). Since
flush_tlb_kernel_range() disables preemption for the entire operation,
it can hurt the scheduling latency of other tasks on the current CPU.
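
For reference, the problematic shape is roughly the following
(simplified sketch, not the exact upstream code):

	/*
	 * Preemption stays disabled across the whole synchronous IPI
	 * broadcast: the calling CPU cannot reschedule until every
	 * online CPU has finished do_kernel_range_flush().
	 */
	guard(preempt)();
	on_each_cpu(do_kernel_range_flush, &info, 1);	/* wait == 1 */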
The previous patch converted flush_tlb_info from a per-CPU variable to
an on-stack variable. In addition, it is no longer necessary to
explicitly disable preemption before calling smp_call*(), since those
helpers handle preemption internally. It is now safe to run
flush_tlb_kernel_range() with preemption enabled.
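
For reference, the two scoped guards map onto the preemption primitives
roughly as follows (illustrative sketch based on the guard definitions
in <linux/preempt.h>):

	{
		guard(preempt)();  /* preempt_disable() ... preempt_enable() */
		/* No preemption and no migration within this scope. */
	}

	{
		guard(migrate)();  /* migrate_disable() ... migrate_enable() */
		/* Task is pinned to this CPU but remains preemptible. */
	}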
Signed-off-by: Chuyi Zhou <zhouchuyi@...edance.com>
---
arch/x86/mm/tlb.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 4162d7ff024f..f0de6c1e387f 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1467,6 +1467,8 @@ static void invlpgb_kernel_range_flush(struct flush_tlb_info *info)
 {
 	unsigned long addr, nr;
 
+	guard(preempt)();
+
 	for (addr = info->start; addr < info->end; addr += nr << PAGE_SHIFT) {
 		nr = (info->end - addr) >> PAGE_SHIFT;
 
@@ -1517,7 +1519,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		.new_tlb_gen = TLB_GENERATION_INVALID
 	};
 
-	guard(preempt)();
+	guard(migrate)();
 
 	if ((end - start) >> PAGE_SHIFT > tlb_single_page_flush_ceiling) {
 		start = 0;
--
2.20.1