Message-ID: <20181129144256.GI32259@char.us.oracle.com>
Date: Thu, 29 Nov 2018 09:42:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jiri Kosina <jkosina@...e.cz>,
Tom Lendacky <thomas.lendacky@....com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Casey Schaufler <casey.schaufler@...el.com>,
Asit Mallick <asit.k.mallick@...el.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Jon Masters <jcm@...hat.com>,
Waiman Long <longman9394@...il.com>,
Greg KH <gregkh@...uxfoundation.org>,
Dave Stewart <david.c.stewart@...el.com>,
Kees Cook <keescook@...omium.org>
Subject: Re: [patch V2 08/28] sched/smt: Make sched_smt_present track topology
On Sun, Nov 25, 2018 at 07:33:36PM +0100, Thomas Gleixner wrote:
> Currently the 'sched_smt_present' static key is enabled when at CPU bringup
> SMT topology is observed, but it is never disabled. However there is demand
> to also disable the key when the topology changes such that there is no SMT
> present anymore.
>
> Implement this by making the key count the number of cores that have SMT
> enabled.
>
> In particular, the SMT topology bits are set before interrupts are enabled
> and similarly, are cleared after interrupts are disabled for the last time
> and the CPU dies.
I see that the number you used is '2', but I thought that there are some
CPUs out there (Knights Landing?) that can have four threads per core?
Would it be better to have a generic function that provides the number
of threads the platform exposes, and use that instead of a constant
value?
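
Something along these lines, perhaps (a rough, untested sketch;
smt_threads_per_core() is a made-up name for whatever topology helper
would report this - on x86 the information lives in smp_num_siblings):

	/*
	 * Hypothetical helper: number of SMT threads per core that the
	 * platform exposes, instead of hard-coding 2.
	 */
	static inline unsigned int smt_threads_per_core(void)
	{
		/*
		 * Illustrative only; the real value would come from
		 * arch topology code, e.g. smp_num_siblings on x86.
		 */
		return 2;
	}

and then in sched_cpu_activate() / sched_cpu_deactivate():

	if (cpumask_weight(cpu_smt_mask(cpu)) == smt_threads_per_core())
		static_branch_inc_cpuslocked(&sched_smt_present);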
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
>
> ---
> kernel/sched/core.c | 19 +++++++++++--------
> 1 file changed, 11 insertions(+), 8 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
>
> #ifdef CONFIG_SCHED_SMT
> /*
> - * The sched_smt_present static key needs to be evaluated on every
> - * hotplug event because at boot time SMT might be disabled when
> - * the number of booted CPUs is limited.
> - *
> - * If then later a sibling gets hotplugged, then the key would stay
> - * off and SMT scheduling would never be functional.
> + * When going up, increment the number of cores with SMT present.
> */
> - if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
> - static_branch_enable_cpuslocked(&sched_smt_present);
> + if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> + static_branch_inc_cpuslocked(&sched_smt_present);
> #endif
> set_cpu_active(cpu, true);
>
> @@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cp
> */
> synchronize_rcu_mult(call_rcu, call_rcu_sched);
>
> +#ifdef CONFIG_SCHED_SMT
> + /*
> + * When going down, decrement the number of cores with SMT present.
> + */
> + if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> + static_branch_dec_cpuslocked(&sched_smt_present);
> +#endif
> +
> if (!sched_smp_initialized)
> return 0;
>
>
>