Message-ID: <20251114151428.1064524-11-vschneid@redhat.com>
Date: Fri, 14 Nov 2025 16:14:28 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
rcu@...r.kernel.org,
x86@...nel.org,
linux-arm-kernel@...ts.infradead.org,
loongarch@...ts.linux.dev,
linux-riscv@...ts.infradead.org,
linux-arch@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Arnd Bergmann <arnd@...db.de>,
Frederic Weisbecker <frederic@...nel.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Jason Baron <jbaron@...mai.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ard Biesheuvel <ardb@...nel.org>,
Sami Tolvanen <samitolvanen@...gle.com>,
"David S. Miller" <davem@...emloft.net>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Joel Fernandes <joelagnelf@...dia.com>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Uladzislau Rezki <urezki@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Masahiro Yamada <masahiroy@...nel.org>,
Han Shen <shenhan@...gle.com>,
Rik van Riel <riel@...riel.com>,
Jann Horn <jannh@...gle.com>,
Dan Carpenter <dan.carpenter@...aro.org>,
Oleg Nesterov <oleg@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Clark Williams <williams@...hat.com>,
Yair Podemsky <ypodemsk@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Daniel Wagner <dwagner@...e.de>,
Petr Tesarik <ptesarik@...e.com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>
Subject: [RFC PATCH v7 31/31] x86/entry: Add an option to coalesce TLB flushes
Previous patches have introduced a mechanism to prevent kernel text updates
from inducing interference on isolated CPUs. A similar action is required
for kernel-range TLB flushes in order to silence the biggest remaining
cause of IPI interference on isolated CPUs.

These flushes are mostly caused by vmalloc manipulations - e.g. on x86 with
CONFIG_VMAP_STACK, spawning enough processes will easily trigger
flushes. Unfortunately, the newly added context_tracking IPI deferral
mechanism cannot be leveraged for TLB flushes, as the deferred work would
be executed too late. Consider the following execution flow:

  <userspace>

  !interrupt!
    SWITCH_TO_KERNEL_CR3 // vmalloc range becomes accessible
    idtentry_func_foo()
      irqentry_enter()
        irqentry_enter_from_user_mode()
          enter_from_user_mode()
            [...]
              ct_kernel_enter_state()
                ct_work_flush() // deferred flush would be done here

Since there is no sane way to assert that no stale TLB entry is accessed
during kernel entry, any code executed between SWITCH_TO_KERNEL_CR3 and
ct_work_flush() is at risk of accessing a stale entry. Dave suggested
hacking up something within SWITCH_TO_KERNEL_CR3 itself, which is what the
previous patches implement.

Make kernel-range TLB flush deferral available via CONFIG_COALESCE_TLBI.
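
To illustrate the idea, the IPI-sending side of such a deferral could look
roughly like the sketch below. This is only a sketch with made-up helper
names (flush_tlb_kernel_range_deferrable() and ct_cpu_in_user() do not
exist in the tree) and is not the implementation used by this series,
which, as noted above, performs the deferred flush within the
SWITCH_TO_KERNEL_CR3 assembly:

  /*
   * Sketch only: flush_tlb_kernel_range_deferrable() and ct_cpu_in_user()
   * are made-up names and do not exist in the tree.
   */
  #include <linux/cpumask.h>
  #include <linux/gfp.h>
  #include <linux/smp.h>
  #include <linux/tick.h>
  #include <asm/tlbflush.h>

  static void coalesced_flush_fn(void *unused)
  {
          __flush_tlb_all();
  }

  static void flush_tlb_kernel_range_deferrable(void)
  {
          cpumask_var_t targets;
          unsigned int cpu;

          if (!alloc_cpumask_var(&targets, GFP_ATOMIC)) {
                  /* Allocation failure: fall back to flushing everyone. */
                  flush_tlb_all();
                  return;
          }

          cpumask_copy(targets, cpu_online_mask);

          /*
           * With kPTI, a NOHZ-full CPU running in userspace is on the user
           * CR3 and cannot access the vmalloc range, so its flush can wait
           * until it next switches to the kernel page table.
           */
          for_each_cpu(cpu, targets) {
                  if (tick_nohz_full_cpu(cpu) && ct_cpu_in_user(cpu))
                          cpumask_clear_cpu(cpu, targets);
          }

          on_each_cpu_mask(targets, coalesced_flush_fn, NULL, 1);
          free_cpumask_var(targets);
  }

The receiving end is then the unconditional kernel-range flush done right
after switching to the kernel page table, as described in the Kconfig help
below.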
Signed-off-by: Valentin Schneider <vschneid@...hat.com>
---
arch/x86/Kconfig | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fa9229c0e0939..04f9d6496bbbc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2189,6 +2189,23 @@ config ADDRESS_MASKING
 	  The capability can be used for efficient address sanitizers (ASAN)
 	  implementation and for optimizations in JITs.
 
+config COALESCE_TLBI
+	def_bool n
+	prompt "Coalesce kernel TLB flushes for NOHZ-full CPUs"
+	depends on X86_64 && MITIGATION_PAGE_TABLE_ISOLATION && NO_HZ_FULL
+	help
+	  TLB flushes for kernel addresses can lead to IPIs being sent to
+	  NOHZ-full CPUs, thus kicking them out of userspace.
+
+	  This option coalesces kernel-range TLB flushes for NOHZ-full CPUs into
+	  a single flush executed at kernel entry, right after switching to the
+	  kernel page table. Note that this flush is unconditional, even if no
+	  remote flush was issued during the previous userspace execution window.
+
+	  This obviously makes the user->kernel transition overhead even worse.
+
+	  If unsure, say N.
+
 config HOTPLUG_CPU
 	def_bool y
 	depends on SMP
--
2.51.0
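
For reference, a configuration fragment satisfying the dependencies above
and enabling the new option might look as follows (sketch only; CONFIG_X86_64
is implied, and the isolated CPUs additionally need to be listed via the
nohz_full= boot parameter):

  CONFIG_NO_HZ_FULL=y
  CONFIG_MITIGATION_PAGE_TABLE_ISOLATION=y
  CONFIG_COALESCE_TLBI=y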