Message-ID: <20260124171546.43398-2-qq570070308@gmail.com>
Date: Sun, 25 Jan 2026 01:15:44 +0800
From: Xie Yuanbin <qq570070308@...il.com>
To: peterz@...radead.org,
tglx@...nel.org,
riel@...riel.com,
segher@...nel.crashing.org,
david@...nel.org,
hpa@...or.com,
arnd@...db.de,
mingo@...hat.com,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
vschneid@...hat.com,
bp@...en8.de,
dave.hansen@...ux.intel.com,
luto@...nel.org,
houwenlong.hwl@...group.com
Cc: linux-kernel@...r.kernel.org,
x86@...nel.org,
Xie Yuanbin <qq570070308@...il.com>
Subject: [PATCH v6 1/3] x86/mm/tlb: Make enter_lazy_tlb() always inline on x86

The x86 implementation of enter_lazy_tlb() is short and is called during
context switching, which is a hot path.

Make enter_lazy_tlb() always inline on x86 to improve performance.
Signed-off-by: Xie Yuanbin <qq570070308@...il.com>
Reviewed-by: Rik van Riel <riel@...riel.com>
Cc: Thomas Gleixner <tglx@...nel.org>
Cc: David Hildenbrand (Red Hat) <david@...nel.org>
---
arch/x86/include/asm/mmu_context.h | 23 ++++++++++++++++++++++-
arch/x86/mm/tlb.c | 21 ---------------------
2 files changed, 22 insertions(+), 22 deletions(-)
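
Note (not part of the patch): below is a minimal userspace sketch of the
inlining pattern the diff applies, i.e. replacing an out-of-line function
with a static __always_inline definition in a header so the body is emitted
directly at each call site instead of paying for a call/return on the
context-switch path. The names and the simplified state here are
hypothetical stand-ins, not the kernel's per-CPU TLB state. The
#ifndef MODULE guard in the diff presumably exists because that per-CPU
state is not exported to modules, so the inline body cannot be built there.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the per-CPU TLB state the real function touches. */
static bool loaded_mm_is_init_mm;
static bool tlb_is_lazy;

/*
 * Forced-inline variant: with the always_inline attribute the compiler
 * emits the check and the store directly at the call site, avoiding a
 * function call on a hot path. __always_inline in the kernel expands to
 * roughly this attribute.
 */
static inline __attribute__((always_inline)) void enter_lazy_tlb_sketch(void)
{
	if (loaded_mm_is_init_mm)
		return;

	tlb_is_lazy = true;
}

int main(void)
{
	enter_lazy_tlb_sketch();
	printf("lazy=%d\n", tlb_is_lazy);
	return 0;
}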
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 1acafb1c6a93..ec3f9bebcf7b 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -136,8 +136,29 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
}
#endif
+/*
+ * Please ignore the name of this function. It should be called
+ * switch_to_kernel_thread().
+ *
+ * enter_lazy_tlb() is a hint from the scheduler that we are entering a
+ * kernel thread or other context without an mm. Acceptable implementations
+ * include doing nothing whatsoever, switching to init_mm, or various clever
+ * lazy tricks to try to minimize TLB flushes.
+ *
+ * The scheduler reserves the right to call enter_lazy_tlb() several times
+ * in a row. It will notify us that we're going back to a real mm by
+ * calling switch_mm_irqs_off().
+ */
+#ifndef MODULE
+static __always_inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+ if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
+ return;
+
+ this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
+}
+#endif
#define enter_lazy_tlb enter_lazy_tlb
-extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
extern void mm_init_global_asid(struct mm_struct *mm);
extern void mm_free_global_asid(struct mm_struct *mm);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 621e09d049cb..af43d177087e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -971,27 +971,6 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
}
}
-/*
- * Please ignore the name of this function. It should be called
- * switch_to_kernel_thread().
- *
- * enter_lazy_tlb() is a hint from the scheduler that we are entering a
- * kernel thread or other context without an mm. Acceptable implementations
- * include doing nothing whatsoever, switching to init_mm, or various clever
- * lazy tricks to try to minimize TLB flushes.
- *
- * The scheduler reserves the right to call enter_lazy_tlb() several times
- * in a row. It will notify us that we're going back to a real mm by
- * calling switch_mm_irqs_off().
- */
-void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
- if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
- return;
-
- this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
-}
-
/*
* Using a temporary mm allows to set temporary mappings that are not accessible
* by other CPUs. Such mappings are needed to perform sensitive memory writes
--
2.51.0