Message-ID: <a89cac88-2175-aee7-5307-928fbd6bd072@cn.fujitsu.com>
Date: Mon, 13 Nov 2017 11:52:14 +0800
From: Dou Liyang <douly.fnst@...fujitsu.com>
To: Pavel Tatashin <pasha.tatashin@...cle.com>,
<steven.sistare@...cle.com>, <daniel.m.jordan@...cle.com>,
<linux@...linux.org.uk>, <schwidefsky@...ibm.com>,
<heiko.carstens@...ibm.com>, <john.stultz@...aro.org>,
<sboyd@...eaurora.org>, <x86@...nel.org>,
<linux-kernel@...r.kernel.org>, <mingo@...hat.com>,
<tglx@...utronix.de>, <hpa@...or.com>
Subject: Re: [PATCH v8 1/6] x86/tsc: remove tsc_disabled flag
Hi Pavel,
At 11/09/2017 11:01 AM, Pavel Tatashin wrote:
> tsc_disabled is set when notsc is passed as a kernel parameter. The reason we
> have notsc is to avoid timing problems on multi-socket systems. We already
> have a mechanism, however, to detect and resolve these issues by invoking the
> TSC-unstable path. Thus, make notsc behave the same as tsc=unstable.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@...cle.com>
I am not sure whether I should add my tag here.
Anyway, it looks good to me.
Reviewed-by: Dou Liyang <douly.fnst@...fujitsu.com>
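
For reference (from memory, so the exact lines may differ slightly), the existing
tsc=unstable handling sits right next to notsc_setup() in the same file and looks
roughly like this, so after this patch both notsc and tsc=unstable funnel into
mark_tsc_unstable(), just with different reason strings:

static int __init tsc_setup(char *str)
{
	if (!strcmp(str, "reliable"))
		tsc_clocksource_reliable = 1;
	if (!strncmp(str, "noirqtime", 9))
		no_sched_irq_time = 1;
	if (!strcmp(str, "unstable"))
		mark_tsc_unstable("boot parameter"); /* same path notsc now uses */
	return 1;
}
__setup("tsc=", tsc_setup);
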
> ---
>  arch/x86/kernel/tsc.c | 19 +++----------------
>  1 file changed, 3 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> index 97907e152356..dbce6fa32aa9 100644
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -37,11 +37,6 @@ EXPORT_SYMBOL(tsc_khz);
>   */
>  static int __read_mostly tsc_unstable;
> 
> -/* native_sched_clock() is called before tsc_init(), so
> -   we must start with the TSC soft disabled to prevent
> -   erroneous rdtsc usage on !boot_cpu_has(X86_FEATURE_TSC) processors */
> -static int __read_mostly tsc_disabled = -1;
> -
>  static DEFINE_STATIC_KEY_FALSE(__use_tsc);
> 
>  int tsc_clocksource_reliable;
> @@ -247,8 +242,7 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
>  #ifdef CONFIG_X86_TSC
>  int __init notsc_setup(char *str)
>  {
> -	pr_warn("Kernel compiled with CONFIG_X86_TSC, cannot disable TSC completely\n");
> -	tsc_disabled = 1;
> +	mark_tsc_unstable("boot parameter notsc");
>  	return 1;
>  }
>  #else
> @@ -1229,7 +1223,7 @@ static void tsc_refine_calibration_work(struct work_struct *work)
> 
>  static int __init init_tsc_clocksource(void)
>  {
> -	if (!boot_cpu_has(X86_FEATURE_TSC) || tsc_disabled > 0 || !tsc_khz)
> +	if (!boot_cpu_has(X86_FEATURE_TSC) || !tsc_khz)
>  		return 0;
> 
>  	if (tsc_clocksource_reliable)
> @@ -1330,12 +1324,6 @@ void __init tsc_init(void)
>  		set_cyc2ns_scale(tsc_khz, cpu, cyc);
>  	}
> 
> -	if (tsc_disabled > 0)
> -		return;
> -
> -	/* now allow native_sched_clock() to use rdtsc */
> -
> -	tsc_disabled = 0;
>  	static_branch_enable(&__use_tsc);
> 
>  	if (!no_sched_irq_time)
> @@ -1365,10 +1353,9 @@ void __init tsc_init(void)
>  unsigned long calibrate_delay_is_known(void)
>  {
>  	int sibling, cpu = smp_processor_id();
> -	int constant_tsc = cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC);
>  	const struct cpumask *mask = topology_core_cpumask(cpu);
> 
> -	if (tsc_disabled || !constant_tsc || !mask)
> +	if (!cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC) || !mask)
>  		return 0;
> 
>  	sibling = cpumask_any_but(mask, cpu);
>