Message-Id: <20260122015825.873904-1-dave@stgolabs.net>
Date: Wed, 21 Jan 2026 17:58:25 -0800
From: Davidlohr Bueso <dave@...olabs.net>
To: dave.hansen@...ux.intel.com,
peterz@...radead.org,
bp@...en8.de
Cc: hpa@...or.com,
dan.j.williams@...el.com,
Jonathan.Cameron@...wei.com,
dave.jiang@...el.com,
dave@...olabs.net,
linux-kernel@...r.kernel.org,
x86@...nel.org,
linux-cxl@...r.kernel.org
Subject: [PATCH] x86, memregion: Avoid big hammer from cpu_cache_invalidate_memregion()

The original justification for the wbinvd_on_all_cpus() big hammer was
that its users were one-time occurrences at boot, which mitigated much
of the system-wide disruptiveness and wholesale cache destruction. This
no longer holds with users such as memory provisioning through CXL
Dynamic Capacity Devices.

Instead, use clflushopt where available and only invalidate the range
in question. Performance of course scales poorly with the region size,
but it is ultimately less invasive.

Signed-off-by: Davidlohr Bueso <dave@...olabs.net>
---
arch/x86/mm/pat/set_memory.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
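
For reviewers, a minimal sketch of how a prospective caller might use
this interface with the (start, len) signature shown in the hunk below.
The driver-side names (cxl_dcd_release_extent, struct cxl_extent_example)
are hypothetical and only illustrate the intended call pattern; they are
not part of this patch, and the header assumed to carry the declaration
is <linux/memregion.h>.

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/memregion.h>

/* Hypothetical extent descriptor, for illustration only. */
struct cxl_extent_example {
	phys_addr_t hpa;	/* host physical base of the extent */
	size_t len;		/* extent length in bytes */
};

/* Hypothetical release path for a dynamic-capacity extent. */
static int cxl_dcd_release_extent(struct cxl_extent_example *ext)
{
	int rc;

	if (!cpu_cache_has_invalidate_memregion())
		return -EOPNOTSUPP;

	/*
	 * With this patch, only ext's physical range is flushed via
	 * clflushopt when supported, rather than wbinvd on all CPUs.
	 */
	rc = cpu_cache_invalidate_memregion(ext->hpa, ext->len);
	if (rc)
		return rc;

	/* ... hand the capacity back to the device ... */
	return 0;
}

The point is that the invalidation cost then scales with ext->len
rather than destroying every cache on every CPU.
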
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 6c6eb486f7a6..4a1c4f6bec17 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -372,6 +372,19 @@ int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len)
{
if (WARN_ON_ONCE(!cpu_cache_has_invalidate_memregion()))
return -ENXIO;
+
+ if (static_cpu_has(X86_FEATURE_CLFLUSHOPT)) {
+ void *vaddr = memremap(start, len, MEMREMAP_WB);
+
+ if (!vaddr)
+ goto fallback;
+
+ clflush_cache_range(vaddr, len);
+ memunmap(vaddr);
+
+ return 0;
+ }
+fallback:
wbinvd_on_all_cpus();
return 0;
}
--
2.39.5