Message-Id: <e48e75a70ec4a821caa0cf2393a0554d619afdd0.1542757030.git.tim.c.chen@linux.intel.com>
Date: Tue, 20 Nov 2018 16:00:08 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Jiri Kosina <jikos@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tom Lendacky <thomas.lendacky@....com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Andi Kleen <ak@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Casey Schaufler <casey.schaufler@...el.com>,
Asit Mallick <asit.k.mallick@...el.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Jon Masters <jcm@...hat.com>,
Waiman Long <longman9394@...il.com>,
Greg KH <gregkh@...uxfoundation.org>,
Dave Stewart <david.c.stewart@...el.com>,
linux-kernel@...r.kernel.org, x86@...nel.org,
stable@...r.kernel.org
Subject: [Patch v6 16/16] x86/smt: Allow disabling of SMT when last SMT is offlined

Currently, cpu_use_smt_and_hotplug is set only at boot time to
indicate whether SMT is in use.

However, the CPU topology can change at runtime: once the last SMT
sibling is offlined, the SMT code paths can safely be skipped. The
scheduler's sched_smt_present static key already detects this condition.

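For illustration only (not part of this patch), the scheduler can keep
sched_smt_present in sync with the topology from its CPU hotplug
callbacks, roughly along the lines of the sketch below. The two hook
names are hypothetical and the call sites are assumed here;
sched_smt_present, cpu_smt_mask() and the
static_branch_inc()/static_branch_dec() helpers are existing kernel
interfaces:

	#include <linux/cpu.h>
	#include <linux/topology.h>

	#ifdef CONFIG_SCHED_SMT
	/* Hypothetical hotplug hook: @cpu has just come online. */
	static void sched_smt_cpu_up(unsigned int cpu)
	{
		/* Second sibling of this core onlined: SMT is now present. */
		if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
			static_branch_inc(&sched_smt_present);
	}

	/* Hypothetical hotplug hook: @cpu is about to go offline. */
	static void sched_smt_cpu_down(unsigned int cpu)
	{
		/* This core is dropping from two siblings to one. */
		if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
			static_branch_dec(&sched_smt_present);
	}
	#endif

The key thus stays enabled as long as at least one core still has a
sibling pair online, and goes false when the last pair is broken up.
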
Export sched_smt_present and incorporate it into cpu_use_smt_and_hotplug
so that the SMT code paths are disabled when no paired siblings remain
online.

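Purely as a hypothetical example of the resulting check (the function
name below is made up and not from this series), a caller would guard
an SMT-only path as follows; since both halves of the macro are jump
labels, the check is implemented with runtime code patching rather
than loads and compares:

	#include <linux/cpu.h>

	/*
	 * Hypothetical caller: skip the SMT-only work (e.g. a
	 * per-sibling mitigation update) when SMT is disabled or no
	 * sibling pairs remain online.
	 */
	static void update_smt_mitigation(void)
	{
		if (!cpu_use_smt_and_hotplug)
			return;

		/* ... SMT-specific work here ... */
	}
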
Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---
 include/linux/cpu.h  | 12 ++++++++++++
 kernel/sched/sched.h |  2 --
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 4fef90a..2fc649d 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -100,6 +100,10 @@ static inline void cpu_maps_update_done(void)
 #endif /* CONFIG_SMP */
 extern struct bus_type cpu_subsys;
 
+#ifdef CONFIG_SCHED_SMT
+extern struct static_key_false sched_smt_present;
+#endif
+
 #ifdef CONFIG_HOTPLUG_CPU
 extern void cpus_write_lock(void);
 extern void cpus_write_unlock(void);
@@ -172,7 +176,15 @@ static inline void cpuhp_report_idle_dead(void) { }
 
 #if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT)
 DECLARE_STATIC_KEY_TRUE(cpu_smt_enabled);
+
+#ifdef CONFIG_SCHED_SMT
+#define cpu_use_smt_and_hotplug \
+	(static_branch_likely(&cpu_smt_enabled) && \
+	 static_branch_unlikely(&sched_smt_present))
+#else
 #define cpu_use_smt_and_hotplug (static_branch_likely(&cpu_smt_enabled))
+#endif
+
 extern void cpu_smt_disable(bool force);
 extern void cpu_smt_check_topology_early(void);
 extern void cpu_smt_check_topology(void);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 618577f..e1e3f09 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -937,8 +937,6 @@ static inline int cpu_of(struct rq *rq)
 
 
 #ifdef CONFIG_SCHED_SMT
-extern struct static_key_false sched_smt_present;
-
 extern void __update_idle_core(struct rq *rq);
 
 static inline void update_idle_core(struct rq *rq)
--
2.9.4