Date:   Thu, 20 Apr 2017 16:51:41 -0300
From:   Lauro Ramos Venancio <lvenanci@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     lwang@...hat.com, riel@...hat.com, Mike Galbraith <efault@....de>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
        Lauro Ramos Venancio <lvenanci@...hat.com>
Subject: [PATCH 2/4] sched/topology: all instances of a sched group must use the same sched_group_capacity

Use the group balance CPU to select the same sched_group_capacity
instance for all instances of a sched group.

Since the group mask is stored in struct sched_group_capacity,
group_balance_cpu() cannot be used before the mask has been set up.
This patch therefore adds a helper that finds the group balance CPU
when the mask is not yet available.

Signed-off-by: Lauro Ramos Venancio <lvenanci@...hat.com>
---
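
[Illustrative note, not part of the patch] Before this change, the
sched_group_capacity instance attached to an overlapping group depended on
which CPU happened to be building that particular instance, so two instances
of the same group could end up with different sgc objects. Deriving a
canonical balance CPU from the group span makes every instance resolve to the
same object. The toy program below sketches that idea only; all names
(toy_sgc, old_pick, new_pick) and numbers are invented for illustration and
are not kernel code.

/*
 * Toy model: why deriving a canonical "balance CPU" from the group span
 * makes every instance of an overlapping sched group share one capacity
 * object.  Everything here is a made-up stand-in for illustration.
 */
#include <assert.h>
#include <stdio.h>

#define NR_CPUS 4

struct toy_sgc {                              /* stand-in for sched_group_capacity */
	int ref;
};

static struct toy_sgc per_cpu_sgc[NR_CPUS];   /* stand-in for sdd->sgc */

/* Old scheme: use the sgc of whichever CPU is building this instance. */
static struct toy_sgc *old_pick(int building_cpu)
{
	return &per_cpu_sgc[building_cpu];
}

/* New scheme: always use the first CPU of the group span (balance CPU). */
static struct toy_sgc *new_pick(const int *span)
{
	return &per_cpu_sgc[span[0]];
}

int main(void)
{
	int span[] = { 2, 3 };      /* CPUs covered by the overlapping group */

	/* Two instances of the same group, built by CPU 2 and by CPU 3. */
	assert(old_pick(2) != old_pick(3));        /* two distinct sgc objects */
	assert(new_pick(span) == new_pick(span));  /* one shared sgc object    */

	new_pick(span)->ref += 2;                  /* both instances take a ref */
	printf("shared sgc ref count: %d\n", new_pick(span)->ref);
	return 0;
}
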
 kernel/sched/topology.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f8b53b3..55bbaf7 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -529,6 +529,27 @@ int group_balance_cpu(struct sched_group *sg)
 	return cpumask_first_and(sched_group_cpus(sg), sched_group_mask(sg));
 }
 
+/*
+ * Find the group balance cpu when the group mask is not available yet.
+ */
+static int find_group_balance_cpu(struct sched_domain *sd,
+				  struct sched_group *sg)
+{
+	const struct cpumask *sg_span = sched_group_cpus(sg);
+	struct sd_data *sdd = sd->private;
+	struct sched_domain *sibling;
+	int i;
+
+	for_each_cpu(i, sg_span) {
+		sibling = *per_cpu_ptr(sdd->sd, i);
+		if (cpumask_test_cpu(i, sched_domain_span(sibling)))
+			return i;
+	}
+
+	WARN(1, "group balance cpu not found.");
+	return 0;
+}
+
 static struct sched_group *
 build_group_from_child_sched_domain(struct sched_domain *sd, int cpu)
 {
@@ -551,10 +572,11 @@ int group_balance_cpu(struct sched_group *sg)
 }
 
 static void init_overlap_sched_group(struct sched_domain *sd,
-				     struct sched_group *sg, int cpu)
+				     struct sched_group *sg)
 {
 	struct sd_data *sdd = sd->private;
 	struct cpumask *sg_span;
+	int cpu = find_group_balance_cpu(sd, sg);
 
 	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
 	if (atomic_inc_return(&sg->sgc->ref) == 1)
@@ -601,7 +623,7 @@ static void init_overlap_sched_group(struct sched_domain *sd,
 		sg_span = sched_group_cpus(sg);
 		cpumask_or(covered, covered, sg_span);
 
-		init_overlap_sched_group(sd, sg, i);
+		init_overlap_sched_group(sd, sg);
 
 		if (!first)
 			first = sg;
-- 
1.8.3.1
