Message-ID: <20250211194407.2577252-5-sohil.mehta@intel.com>
Date: Tue, 11 Feb 2025 19:43:54 +0000
From: Sohil Mehta <sohil.mehta@...el.com>
To: x86@...nel.org,
Dave Hansen <dave.hansen@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>,
"Rafael J . Wysocki" <rafael@...nel.org>,
Len Brown <lenb@...nel.org>,
Andy Lutomirski <luto@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Fenghua Yu <fenghua.yu@...el.com>,
Jean Delvare <jdelvare@...e.com>,
Guenter Roeck <linux@...ck-us.net>,
Zhang Rui <rui.zhang@...el.com>,
Andrew Cooper <andrew.cooper3@...rix.com>,
David Laight <david.laight.linux@...il.com>,
Sohil Mehta <sohil.mehta@...el.com>,
linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-acpi@...r.kernel.org,
linux-pm@...r.kernel.org,
linux-hwmon@...r.kernel.org
Subject: [PATCH v2 04/17] x86/cpu/intel: Fix the movsl alignment preference for extended Families
The alignment preference for 32-bit movsl-based bulk memory moves has
been 8 bytes for a long time. However, this preference is only set for
Family 6 and 15 processors.

Extend the preference to the upcoming Family numbers 18 and 19 to
maintain the legacy behavior. Also, use a VFM-based check instead of
switching on Family numbers, and refresh the comment to reflect the new
check.
Signed-off-by: Sohil Mehta <sohil.mehta@...el.com>
---
v2: Split the patch into two parts. Update commit message.
---
arch/x86/kernel/cpu/intel.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
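A note on why a single VFM comparison is enough to cover Families 6, 15, 18
and 19: x86_vfm packs vendor, family and model into one integer with family
placed in higher-order bits than model, so for Intel parts the comparison
against INTEL_PENTIUM_PRO (Family 6, model 1) also holds for every later
family. The standalone sketch below only illustrates the idea; the exact bit
positions and vendor value are my reading of
arch/x86/include/asm/cpu_device_id.h and should be treated as assumptions
here, not a quote of the kernel headers.

```c
/* Standalone illustration of the VFM comparison; not kernel code. */
#include <stdio.h>

/*
 * Assumed packing: model in bits 0-7, family in bits 8-15,
 * vendor in bits 16-23.
 */
#define VFM_MAKE(vendor, family, model) \
	(((vendor) << 16) | ((family) << 8) | (model))

#define X86_VENDOR_INTEL	0
#define INTEL_PENTIUM_PRO	VFM_MAKE(X86_VENDOR_INTEL, 6, 1)

int main(void)
{
	/* Families the old switch covered (6, 15) plus the new ones (18, 19). */
	unsigned int families[] = { 5, 6, 15, 18, 19 };

	for (unsigned int i = 0; i < sizeof(families) / sizeof(families[0]); i++) {
		/* Model 1 stands in for "some model of this family". */
		unsigned int vfm = VFM_MAKE(X86_VENDOR_INTEL, families[i], 1);

		printf("Family %2u: movsl mask %s\n", families[i],
		       vfm >= INTEL_PENTIUM_PRO ? "would be set" : "would not be set");
	}
	return 0;
}
```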
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 3dce22f00dc3..e5f34a90963e 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -449,23 +449,16 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
 	    (c->x86_stepping < 0x6 || c->x86_stepping == 0xb))
 		set_cpu_bug(c, X86_BUG_11AP);
 
-
 #ifdef CONFIG_X86_INTEL_USERCOPY
 	/*
-	 * Set up the preferred alignment for movsl bulk memory moves
+	 * movsl bulk memory moves can be slow when source and dest are not
+	 * both 8-byte aligned. PII/PIII only like movsl with 8-byte alignment.
+	 *
+	 * Set the preferred alignment for Pentium Pro and newer processors, as
+	 * it has only been tested on these.
 	 */
-	switch (c->x86) {
-	case 4:		/* 486: untested */
-		break;
-	case 5:		/* Old Pentia: untested */
-		break;
-	case 6:		/* PII/PIII only like movsl with 8-byte alignment */
+	if (c->x86_vfm >= INTEL_PENTIUM_PRO)
 		movsl_mask.mask = 7;
-		break;
-	case 15:	/* P4 is OK down to 8-byte alignment */
-		movsl_mask.mask = 7;
-		break;
-	}
 #endif
 
 	intel_smp_check(c);
--
2.43.0
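As background on what movsl_mask.mask actually gates: as far as I understand
it, the 32-bit Intel usercopy code skips the movsl fast path for larger copies
when source and destination are not mutually aligned with respect to the mask,
so mask = 7 effectively demands matching 8-byte alignment. A rough standalone
sketch of that kind of check follows; the helper name, the length cutoff and
the example addresses are assumptions for illustration, not a quote of
arch/x86/lib/usercopy_32.c.

```c
/* Hypothetical sketch of the kind of gate a mask like this feeds. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* mask = 7: src and dst must agree in their low three address bits. */
static const uintptr_t movsl_mask = 7;

static bool movsl_fast_path_ok(uintptr_t dst, uintptr_t src, size_t len)
{
	/* Short copies are cheap either way; don't bother gating them. */
	if (len < 64)
		return true;

	/* Mutually misaligned buffers make a movsl loop run slowly. */
	return ((dst ^ src) & movsl_mask) == 0;
}

int main(void)
{
	/* Same 8-byte offset on both sides: the fast path is allowed. */
	return movsl_fast_path_ok(0x1008, 0x2008, 4096) ? 0 : 1;
}
```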