Message-ID: <04684b8a-c8a7-5d4d-de8d-16b389d0c64f@amd.com>
Date: Mon, 21 Oct 2024 09:37:43 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Nikunj A Dadhania <nikunj@....com>, linux-kernel@...r.kernel.org,
bp@...en8.de, x86@...nel.org, kvm@...r.kernel.org
Cc: mingo@...hat.com, tglx@...utronix.de, dave.hansen@...ux.intel.com,
pgonda@...gle.com, seanjc@...gle.com, pbonzini@...hat.com,
Alexey Makhalov <alexey.makhalov@...adcom.com>,
Juergen Gross <jgross@...e.com>, Boris Ostrovsky <boris.ostrovsky@...cle.com>
Subject: Re: [PATCH v13 11/13] tsc: Switch to native sched clock
On 10/21/24 00:51, Nikunj A Dadhania wrote:
> Although the kernel switches over to stable TSC clocksource instead of PV
> clocksource, the scheduler still keeps on using PV clocks as the sched
> clock source. This is because the following KVM, Xen and VMWare, switches
s/the following//
s/switches/switch/
> the paravirt sched clock handler in their init routines. The HyperV is the
s/The HyperV/HyperV/
> only PV clock source that checks if the platform provides invariant TSC and
s/provides invariant/provides an invariant/
Thanks,
Tom
> does not switch to PV sched clock.
>
> When switching back to stable TSC, restore the scheduler clock to
> native_sched_clock().
>
> As the clock selection happens in the stop machine context, schedule
> delayed work to update the static_call()
>
> Cc: Alexey Makhalov <alexey.makhalov@...adcom.com>
> Cc: Juergen Gross <jgross@...e.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
> Signed-off-by: Nikunj A Dadhania <nikunj@....com>
> ---
> arch/x86/kernel/tsc.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> index 27faf121fb78..38e35cac6c42 100644
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -272,10 +272,25 @@ bool using_native_sched_clock(void)
> {
> return static_call_query(pv_sched_clock) == native_sched_clock;
> }
> +
> +static void enable_native_sc_work(struct work_struct *work)
> +{
> + pr_info("using native sched clock\n");
> + paravirt_set_sched_clock(native_sched_clock);
> +}
> +static DECLARE_DELAYED_WORK(enable_native_sc, enable_native_sc_work);
> +
> +static void enable_native_sched_clock(void)
> +{
> + if (!using_native_sched_clock())
> + schedule_delayed_work(&enable_native_sc, 0);
> +}
> #else
> u64 sched_clock_noinstr(void) __attribute__((alias("native_sched_clock")));
>
> bool using_native_sched_clock(void) { return true; }
> +
> +void enable_native_sched_clock(void) { }
> #endif
>
> notrace u64 sched_clock(void)
> @@ -1157,6 +1172,10 @@ static void tsc_cs_tick_stable(struct clocksource *cs)
> static int tsc_cs_enable(struct clocksource *cs)
> {
> vclocks_set_used(VDSO_CLOCKMODE_TSC);
> +
> + /* Restore native_sched_clock() when switching to TSC */
> + enable_native_sched_clock();
> +
> return 0;
> }
>