Message-ID: <20250106124633.1418972-13-nikunj@amd.com>
Date: Mon, 6 Jan 2025 18:16:32 +0530
From: Nikunj A Dadhania <nikunj@....com>
To: <linux-kernel@...r.kernel.org>, <thomas.lendacky@....com>, <bp@...en8.de>,
<x86@...nel.org>
CC: <kvm@...r.kernel.org>, <mingo@...hat.com>, <tglx@...utronix.de>,
<dave.hansen@...ux.intel.com>, <pgonda@...gle.com>, <seanjc@...gle.com>,
<pbonzini@...hat.com>, <nikunj@....com>, <francescolavra.fl@...il.com>,
Alexey Makhalov <alexey.makhalov@...adcom.com>, Juergen Gross
<jgross@...e.com>, Boris Ostrovsky <boris.ostrovsky@...cle.com>
Subject: [PATCH v16 12/13] x86/tsc: Switch to native sched clock
Although the kernel switches over to the stable TSC clocksource instead of
the PV clocksource, the scheduler still keeps using the PV clock as the
sched clock source. This is because KVM, Xen and VMware switch the
paravirt sched clock handler in their init routines. Hyper-V is the only
PV clock source that checks whether the platform provides an invariant
TSC and, if so, does not switch to the PV sched clock.
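For illustration, this is roughly how such an init routine takes over the
sched clock (a minimal sketch loosely modeled on kvmclock; the names
pv_sched_clock_init()/pv_sched_clock_read() and the this_cpu_pvti() helper
are illustrative, not part of this patch):

	/* Illustrative PV guest code, not part of this patch */
	static u64 pv_sched_clock_read(void)
	{
		/* Read the hypervisor-maintained clock instead of the TSC */
		return pvclock_clocksource_read(this_cpu_pvti());
	}

	static void __init pv_sched_clock_init(void)
	{
		/*
		 * Installed unconditionally, even when the guest TSC is
		 * stable; this is what the change below undoes once the
		 * TSC clocksource is selected.
		 */
		paravirt_set_sched_clock(pv_sched_clock_read);
	}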
When switching back to stable TSC, restore the scheduler clock to
native_sched_clock().
As the clock selection happens in the stop machine context, and a
static_call() update can sleep, schedule delayed work to update the
static_call() from process context.
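For context, paravirt_set_sched_clock() boils down to a static call
update (quoted from memory of arch/x86/kernel/paravirt.c, so treat the
exact shape as approximate):

	void paravirt_set_sched_clock(u64 (*func)(void))
	{
		/*
		 * static_call_update() serializes on a mutex and patches
		 * kernel text, so it may sleep and cannot be called from
		 * the atomic stop machine context; hence the delayed work.
		 */
		static_call_update(pv_sched_clock, func);
	}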
Cc: Alexey Makhalov <alexey.makhalov@...adcom.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Signed-off-by: Nikunj A Dadhania <nikunj@....com>
---
arch/x86/kernel/tsc.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 88d8bfceea04..fe7a0b1b7cfd 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -291,12 +291,26 @@ static void __init upgrade_clock_rating(struct clocksource *tsc_early,
 		tsc->rating = 450;
 	}
 }
+
+static void enable_native_sc_work(struct work_struct *work)
+{
+	pr_info("Using native sched clock\n");
+	paravirt_set_sched_clock(native_sched_clock);
+}
+static DECLARE_DELAYED_WORK(enable_native_sc, enable_native_sc_work);
+
+static void enable_native_sched_clock(void)
+{
+	if (!using_native_sched_clock())
+		schedule_delayed_work(&enable_native_sc, 0);
+}
 #else
 u64 sched_clock_noinstr(void) __attribute__((alias("native_sched_clock")));
 
 bool using_native_sched_clock(void) { return true; }
 
 static void __init upgrade_clock_rating(struct clocksource *tsc_early, struct clocksource *tsc) { }
+static void enable_native_sched_clock(void) { }
 #endif
 
 notrace u64 sched_clock(void)
@@ -1176,6 +1190,10 @@ static void tsc_cs_tick_stable(struct clocksource *cs)
 static int tsc_cs_enable(struct clocksource *cs)
 {
 	vclocks_set_used(VDSO_CLOCKMODE_TSC);
+
+	/* Restore native_sched_clock() when switching to TSC */
+	enable_native_sched_clock();
+
 	return 0;
 }
--
2.34.1