Message-ID: <20241021055156.2342564-12-nikunj@amd.com>
Date: Mon, 21 Oct 2024 11:21:54 +0530
From: Nikunj A Dadhania <nikunj@....com>
To: <linux-kernel@...r.kernel.org>, <thomas.lendacky@....com>, <bp@...en8.de>,
<x86@...nel.org>, <kvm@...r.kernel.org>
CC: <mingo@...hat.com>, <tglx@...utronix.de>, <dave.hansen@...ux.intel.com>,
<pgonda@...gle.com>, <seanjc@...gle.com>, <pbonzini@...hat.com>,
<nikunj@....com>, Alexey Makhalov <alexey.makhalov@...adcom.com>,
"Juergen Gross" <jgross@...e.com>, Boris Ostrovsky <boris.ostrovsky@...cle.com>
Subject: [PATCH v13 11/13] tsc: Switch to native sched clock

Although the kernel switches over to the stable TSC clocksource instead of
the PV clocksource, the scheduler still keeps using the PV clock as the
sched clock source. This is because KVM, Xen and VMware each install their
paravirt sched clock handler in their init routines. Hyper-V is the only PV
clock source that checks whether the platform provides an invariant TSC and
does not switch to a PV sched clock in that case.
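
(Illustration, not part of the patch: a simplified sketch of how such a
guest installs its handler, loosely modeled on arch/x86/kernel/kvmclock.c;
helper names and the offset handling are approximated.)

	/* PV guest sched clock: read the PV clock page, not the TSC */
	static noinstr u64 kvm_sched_clock_read(void)
	{
		return kvm_clock_read() - kvm_sched_clock_offset;
	}

	static void kvm_sched_clock_init(bool stable)
	{
		if (!stable)
			clear_sched_clock_stable();
		kvm_sched_clock_offset = kvm_clock_read();
		/* retarget the pv_sched_clock static call */
		paravirt_set_sched_clock(kvm_sched_clock_read);
	}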

When the kernel switches back to the stable TSC clocksource, restore the
scheduler clock to native_sched_clock().

As clocksource selection happens in stop_machine() context, schedule
delayed work to update the static call; static_call_update() takes locks
and may sleep, so it must not be called from that context.
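
(Background, not part of the patch: the static-call plumbing this relies
on, roughly as in arch/x86/kernel/paravirt.c and asm/paravirt.h; treat the
exact bodies as a sketch.)

	DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);

	void paravirt_set_sched_clock(u64 (*func)(void))
	{
		static_call_update(pv_sched_clock, func);
	}

	/* sched clock reads dispatch through the static call */
	static __always_inline u64 paravirt_sched_clock(void)
	{
		return static_call(pv_sched_clock)();
	}
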
Cc: Alexey Makhalov <alexey.makhalov@...adcom.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Signed-off-by: Nikunj A Dadhania <nikunj@....com>
---
arch/x86/kernel/tsc.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 27faf121fb78..38e35cac6c42 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -272,10 +272,25 @@ bool using_native_sched_clock(void)
 {
 	return static_call_query(pv_sched_clock) == native_sched_clock;
 }
+
+static void enable_native_sc_work(struct work_struct *work)
+{
+	pr_info("using native sched clock\n");
+	paravirt_set_sched_clock(native_sched_clock);
+}
+static DECLARE_DELAYED_WORK(enable_native_sc, enable_native_sc_work);
+
+static void enable_native_sched_clock(void)
+{
+	if (!using_native_sched_clock())
+		schedule_delayed_work(&enable_native_sc, 0);
+}
 #else
 u64 sched_clock_noinstr(void) __attribute__((alias("native_sched_clock")));
 
 bool using_native_sched_clock(void) { return true; }
+
+static void enable_native_sched_clock(void) { }
 #endif
 
 notrace u64 sched_clock(void)
@@ -1157,6 +1172,10 @@ static void tsc_cs_tick_stable(struct clocksource *cs)
 static int tsc_cs_enable(struct clocksource *cs)
 {
 	vclocks_set_used(VDSO_CLOCKMODE_TSC);
+
+	/* Restore native_sched_clock() when switching to TSC */
+	enable_native_sched_clock();
+
 	return 0;
 }
 
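
(Context, not part of the patch: tsc_cs_enable() is the .enable callback of
the TSC clocksource and is invoked by the timekeeping core when the TSC is
selected; sketch of the hookup in tsc.c, with the remaining fields elided.)

	static struct clocksource clocksource_tsc = {
		.name		= "tsc",
		.rating		= 300,
		.read		= read_tsc,
		.enable		= tsc_cs_enable,
		/* ... */
	};
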
--
2.34.1