Message-ID: <20090318092243.24787.92087.stgit@sofia.in.ibm.com>
Date:	Wed, 18 Mar 2009 14:52:43 +0530
From:	Gautham R Shenoy <ego@...ibm.com>
To:	"Vaidyanathan Srinivasan" <svaidy@...ux.vnet.ibm.com>,
	"Peter Zijlstra" <a.p.zijlstra@...llo.nl>,
	"Ingo Molnar" <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org,
	"Suresh Siddha" <suresh.b.siddha@...el.com>,
	"Balbir Singh" <balbir@...ibm.com>,
	Gautham R Shenoy <ego@...ibm.com>
Subject: [PATCH 3 5/6] sched: Arbitrate the nomination of preferred_wakeup_cpu

Currently, for sched_mc/smt_power_savings = 2, we consolidate tasks by
nominating a preferred_wakeup_cpu which is then used as the target for
all further wake-ups.

This preferred_wakeup_cpu is currently nominated by find_busiest_group()
while load balancing for sched_domains which have the
SD_POWERSAVINGS_BALANCE flag set.
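
As background, the wake-up bias works roughly as in the sketch below.
This helper is illustrative only (it is not part of this patch and the
name and exact conditions are simplified); it only shows how the
per-root-domain preferred_wakeup_cpu is consulted at
POWERSAVINGS_BALANCE_WAKEUP instead of waking a completely idle package:

/*
 * Illustrative sketch of the wake-up bias; name and conditions are
 * simplified for exposition.
 */
static int power_savings_wakeup_cpu(int cpu)
{
	if (active_power_savings_level >= POWERSAVINGS_BALANCE_WAKEUP)
		return cpu_rq(cpu)->rd->preferred_wakeup_cpu;

	return cpu;	/* normal wake-up target */
}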

However, on systems which are both multi-threaded and multi-core, we can
have multiple sched_domains in the same hierarchy with the
SD_POWERSAVINGS_BALANCE flag set.

Currently we don't have any arbitration mechanism to decide, while load
balancing, for which sched_domain in the hierarchy find_busiest_group(sd)
should nominate the preferred_wakeup_cpu. Hence a nomination made while
balancing one sched_domain can overwrite a valid nomination made earlier
for another, causing the preferred_wakeup_cpu to ping-pong and preventing
us from effectively consolidating tasks. For example, find_busiest_group()
run for the SMT (sibling) domain and for the MC (core) domain may each
nominate a different CPU, repeatedly overwriting each other's choice.

Fix this by means of an arbitration algorithm, wherein find_busiest_group()
nominates the preferred_wakeup_cpu while load balancing for a particular
sched_domain only if that sched_domain:
- is the topmost power-aware sched_domain.
	OR
- contains the previously nominated preferred_wakeup_cpu in its span.

This will help to further fine-tune the wake-up biasing logic by
identifying a partially busy core within a CPU package instead of
potentially waking up a completely idle core.
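
The arbitration check added to find_busiest_group() in the hunk below
boils down to the following (simplified excerpt; my_rd is the root_domain
of this_cpu):

/*
 * Only nominate a new preferred_wakeup_cpu from this sched_domain if it
 * is the topmost power-aware domain, or if the previously nominated CPU
 * lies within this domain's span.
 */
if (sd->level == my_rd->top_powersavings_sd_lvl ||
    cpu_isset(my_rd->preferred_wakeup_cpu, *sched_domain_span(sd))) {
	my_rd->preferred_wakeup_cpu =
		cpumask_first(sched_group_cpus(group_leader));
}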

Signed-off-by: Gautham R Shenoy <ego@...ibm.com>
---

 kernel/sched.c |   45 +++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 16d7655..651550c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -522,6 +522,14 @@ struct root_domain {
 	 * This is triggered at POWERSAVINGS_BALANCE_WAKEUP(2).
 	 */
 	unsigned int preferred_wakeup_cpu;
+
+	/*
+	 * top_powersavings_sd_lvl records the level of the highest
+	 * sched_domain that has the SD_POWERSAVINGS_BALANCE flag set.
+	 *
+	 * Used to arbitrate nomination of the preferred_wakeup_cpu.
+	 */
+	enum sched_domain_level top_powersavings_sd_lvl;
 #endif
 };
 
@@ -3416,9 +3424,27 @@ out_balanced:
 		goto ret;
 
 	if (this == group_leader && group_leader != group_min) {
+		struct root_domain *my_rd = cpu_rq(this_cpu)->rd;
 		*imbalance = min_load_per_task;
-		if (active_power_savings_level >= POWERSAVINGS_BALANCE_WAKEUP) {
-			cpu_rq(this_cpu)->rd->preferred_wakeup_cpu =
+		/*
+		 * To avoid overwriting of preferred_wakeup_cpu nominations
+		 * while calling find_busiest_group() at various sched_domain
+		 * levels, we define an arbitration mechanism wherein
+		 * find_busiest_group() nominates a preferred_wakeup_cpu at
+		 * the sched_domain sd if:
+		 *
+		 * - sd is the highest sched_domain in the hierarchy having the
+		 *   SD_POWERSAVINGS_BALANCE flag set.
+		 *
+		 *   OR
+		 *
+		 * - sd contains the previously nominated preferred_wakeup_cpu
+		 *   in its span.
+		 */
+		if (sd->level == my_rd->top_powersavings_sd_lvl ||
+			cpu_isset(my_rd->preferred_wakeup_cpu,
+					*sched_domain_span(sd))) {
+			my_rd->preferred_wakeup_cpu =
 				cpumask_first(sched_group_cpus(group_leader));
 		}
 		return group_min;
@@ -7541,6 +7567,8 @@ static int __build_sched_domains(const struct cpumask *cpu_map,
 	struct root_domain *rd;
 	cpumask_var_t nodemask, this_sibling_map, this_core_map, send_covered,
 		tmpmask;
+	struct sched_domain *sd;
+
 #ifdef CONFIG_NUMA
 	cpumask_var_t domainspan, covered, notcovered;
 	struct sched_group **sched_group_nodes = NULL;
@@ -7816,6 +7844,19 @@ static int __build_sched_domains(const struct cpumask *cpu_map,
 
 	err = 0;
 
+	rd->preferred_wakeup_cpu = UINT_MAX;
+	rd->top_powersavings_sd_lvl = SD_LV_NONE;
+
+	if (active_power_savings_level < POWERSAVINGS_BALANCE_WAKEUP)
+		goto free_tmpmask;
+
+	/* Record the level of the highest power-aware sched_domain */
+	for_each_domain(first_cpu(*cpu_map), sd) {
+		if (!(sd->flags & SD_POWERSAVINGS_BALANCE))
+			continue;
+		rd->top_powersavings_sd_lvl = sd->level;
+	}
+
 free_tmpmask:
 	free_cpumask_var(tmpmask);
 free_send_covered:

