Message-ID: <20251113105227.57650-2-qq570070308@gmail.com>
Date: Thu, 13 Nov 2025 18:52:25 +0800
From: Xie Yuanbin <qq570070308@...il.com>
To: tglx@...utronix.de,
riel@...riel.com,
segher@...nel.crashing.org,
david@...hat.com,
peterz@...radead.org,
hpa@...or.com,
osalvador@...e.de,
linux@...linux.org.uk,
mathieu.desnoyers@...icios.com,
paulmck@...nel.org,
pjw@...nel.org,
palmer@...belt.com,
aou@...s.berkeley.edu,
alex@...ti.fr,
hca@...ux.ibm.com,
gor@...ux.ibm.com,
agordeev@...ux.ibm.com,
borntraeger@...ux.ibm.com,
svens@...ux.ibm.com,
davem@...emloft.net,
andreas@...sler.com,
luto@...nel.org,
mingo@...hat.com,
bp@...en8.de,
dave.hansen@...ux.intel.com,
acme@...nel.org,
namhyung@...nel.org,
mark.rutland@....com,
alexander.shishkin@...ux.intel.com,
jolsa@...nel.org,
irogers@...gle.com,
adrian.hunter@...el.com,
james.clark@...aro.org,
anna-maria@...utronix.de,
frederic@...nel.org,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
vschneid@...hat.com,
nathan@...nel.org,
nick.desaulniers+lkml@...il.com,
morbo@...gle.com,
justinstitt@...gle.com,
qq570070308@...il.com,
thuth@...hat.com,
brauner@...nel.org,
arnd@...db.de,
sforshee@...nel.org,
mhiramat@...nel.org,
andrii@...nel.org,
oleg@...hat.com,
jlayton@...nel.org,
aalbersh@...hat.com,
akpm@...ux-foundation.org,
david@...nel.org,
lorenzo.stoakes@...cle.com,
baolin.wang@...ux.alibaba.com,
max.kellermann@...os.com,
ryan.roberts@....com,
nysal@...ux.ibm.com,
urezki@...il.com
Cc: x86@...nel.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org,
sparclinux@...r.kernel.org,
linux-perf-users@...r.kernel.org,
llvm@...ts.linux.dev,
will@...nel.org,
kernel test robot <lkp@...el.com>
Subject: [PATCH v3 1/3] x86/mm: Make enter_lazy_tlb() inline
This function is very short and is called during context switching,
which is a hot code path.
Make it an inline function on x86 to improve performance, as it already
is on the other architectures. The inline definition is guarded with
#ifndef MODULE, because the cpu_tlbstate per-CPU variable it reads is
not exported to modules (see the build robot reports below).
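For context, the call sits directly on the task switch path. A rough
sketch of the call site, simplified from context_switch() in
kernel/sched/core.c (illustrative only, not the exact kernel code):

	/* Switching to a kernel thread, which has no mm of its own. */
	if (!next->mm) {
		/* Hint that TLB work can be deferred while lazy. */
		enter_lazy_tlb(prev->active_mm, next);
		next->active_mm = prev->active_mm;
	}

Inlining the callee removes a function call from every switch to a
kernel thread (e.g. the idle task).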
Signed-off-by: Xie Yuanbin <qq570070308@...il.com>
Reviewed-by: Rik van Riel <riel@...riel.com>
Reported-by: kernel test robot <lkp@...el.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202511091959.kfmo9kPB-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202511092219.73aMMES4-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202511100042.ZklpqjOY-lkp@intel.com/
---
arch/x86/include/asm/mmu_context.h | 23 ++++++++++++++++++++++-
arch/x86/mm/tlb.c | 21 ---------------------
2 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 73bf3b1b44e8..ecd134dcfb34 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -136,8 +136,29 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
}
#endif
+/*
+ * Please ignore the name of this function. It should be called
+ * switch_to_kernel_thread().
+ *
+ * enter_lazy_tlb() is a hint from the scheduler that we are entering a
+ * kernel thread or other context without an mm. Acceptable implementations
+ * include doing nothing whatsoever, switching to init_mm, or various clever
+ * lazy tricks to try to minimize TLB flushes.
+ *
+ * The scheduler reserves the right to call enter_lazy_tlb() several times
+ * in a row. It will notify us that we're going back to a real mm by
+ * calling switch_mm_irqs_off().
+ */
#define enter_lazy_tlb enter_lazy_tlb
-extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
+#ifndef MODULE
+static __always_inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+ if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
+ return;
+
+ this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
+}
+#endif
#define mm_init_global_asid mm_init_global_asid
extern void mm_init_global_asid(struct mm_struct *mm);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5d221709353e..cb715e8e75e4 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -970,27 +970,6 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
}
}
-/*
- * Please ignore the name of this function. It should be called
- * switch_to_kernel_thread().
- *
- * enter_lazy_tlb() is a hint from the scheduler that we are entering a
- * kernel thread or other context without an mm. Acceptable implementations
- * include doing nothing whatsoever, switching to init_mm, or various clever
- * lazy tricks to try to minimize TLB flushes.
- *
- * The scheduler reserves the right to call enter_lazy_tlb() several times
- * in a row. It will notify us that we're going back to a real mm by
- * calling switch_mm_irqs_off().
- */
-void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
- if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
- return;
-
- this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
-}
-
/*
* Using a temporary mm allows to set temporary mappings that are not accessible
* by other CPUs. Such mappings are needed to perform sensitive memory writes
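For completeness: the laziness set above is one half of a handshake;
switch_mm_irqs_off() in this same file clears it again when a real mm
is switched in. Roughly, as a simplified, illustrative sketch (not the
exact kernel code):

	bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);

	/* A real mm is being switched in, so leave lazy mode ... */
	this_cpu_write(cpu_tlbstate_shared.is_lazy, false);

	/*
	 * ... and if the CPU was lazy, TLB flushes for the old mm may
	 * have been skipped while is_lazy was set, so any stale TLB
	 * contents must be flushed or revalidated before reuse.
	 */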
--
2.51.0