Message-ID: <20250106124633.1418972-12-nikunj@amd.com>
Date: Mon, 6 Jan 2025 18:16:31 +0530
From: Nikunj A Dadhania <nikunj@....com>
To: <linux-kernel@...r.kernel.org>, <thomas.lendacky@....com>, <bp@...en8.de>,
<x86@...nel.org>
CC: <kvm@...r.kernel.org>, <mingo@...hat.com>, <tglx@...utronix.de>,
<dave.hansen@...ux.intel.com>, <pgonda@...gle.com>, <seanjc@...gle.com>,
<pbonzini@...hat.com>, <nikunj@....com>, <francescolavra.fl@...il.com>,
Alexey Makhalov <alexey.makhalov@...adcom.com>, Juergen Gross
<jgross@...e.com>, Boris Ostrovsky <boris.ostrovsky@...cle.com>
Subject: [PATCH v16 11/13] x86/tsc: Upgrade TSC clocksource rating for guests

Hypervisor platform setup (x86_hyper_init::init_platform) routines register
their own PV clocksources (KVM, Hyper-V, and Xen) at various clock ratings,
resulting in a PV clocksource being selected even when a stable TSC
clocksource is available. Upgrade the clock rating of the TSC early and
regular clocksources to prefer TSC over PV clock sources when TSC is
invariant, non-stop, and stable.
Cc: Alexey Makhalov <alexey.makhalov@...adcom.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Nikunj A Dadhania <nikunj@....com>
---
arch/x86/kernel/tsc.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 34dec0b72ea8..88d8bfceea04 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -274,10 +274,29 @@ bool using_native_sched_clock(void)
{
return static_call_query(pv_sched_clock) == native_sched_clock;
}
+
+/*
+ * Upgrade the clock rating for TSC early and regular clocksource when the
+ * underlying platform provides non-stop, invariant, and stable TSC. TSC
+ * early/regular clocksource will be preferred over other PV clock sources.
+ */
+static void __init upgrade_clock_rating(struct clocksource *tsc_early,
+ struct clocksource *tsc)
+{
+ if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR) &&
+ cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC) &&
+ cpu_feature_enabled(X86_FEATURE_NONSTOP_TSC) &&
+ !tsc_unstable) {
+ tsc_early->rating = 449;
+ tsc->rating = 450;
+ }
+}
#else
u64 sched_clock_noinstr(void) __attribute__((alias("native_sched_clock")));
bool using_native_sched_clock(void) { return true; }
+
+static void __init upgrade_clock_rating(struct clocksource *tsc_early, struct clocksource *tsc) { }
#endif
notrace u64 sched_clock(void)
@@ -1564,6 +1583,8 @@ void __init tsc_init(void)
if (tsc_clocksource_reliable || no_tsc_watchdog)
tsc_disable_clocksource_watchdog();
+ upgrade_clock_rating(&clocksource_tsc_early, &clocksource_tsc);
+
clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
detect_art();
}
--
2.34.1