Message-ID: <20170424141944.r6vuzxcweae3krz7@hirez.programming.kicks-ass.net>
Date:   Mon, 24 Apr 2017 16:19:44 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Lauro Ramos Venancio <lvenanci@...hat.com>
Cc:     lwang@...hat.com, riel@...hat.com, Mike Galbraith <efault@....de>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] sched/topology: the group balance cpu must be a cpu
 where the group is installed

On Mon, Apr 24, 2017 at 03:03:26PM +0200, Peter Zijlstra wrote:

> Also, would it not make sense to re-order patch 2 to come after this,
> such that we _do_ have the group_mask available and don't have to jump
> through hoops in order to link up the sgc? Afaict we don't actually use
> the sgc until the above (reverse) loop computing the CPU capacities.

That is, if I force 4 on without 2, then doesn't something like the
below also do the right thing? (without duplicating part of the magic
already contained in build_group_mask)

---
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -498,13 +498,16 @@ enum s_alloc {
  *
  * This function can only be used when all the groups are already built.
  */
-static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
+static void
+build_group_mask(struct sched_domain *sd, struct sched_group *sg, struct cpumask *mask)
 {
 	const struct cpumask *sg_span = sched_group_cpus(sg);
 	struct sd_data *sdd = sd->private;
 	struct sched_domain *sibling;
 	int i;
 
+	cpumask_clear(mask);
+
 	for_each_cpu(i, sg_span) {
 		sibling = *per_cpu_ptr(sdd->sd, i);
 
@@ -514,7 +517,7 @@ static void build_group_mask(struct sche
 		if (!cpumask_equal(sg_span, sched_group_cpus(sibling->groups)))
 			continue;
 
-		cpumask_set_cpu(i, sched_group_mask(sg));
+		cpumask_set_cpu(i, mask);
 	}
 }
 
@@ -549,14 +552,19 @@ build_group_from_child_sched_domain(stru
 }
 
 static void init_overlap_sched_group(struct sched_domain *sd,
-				     struct sched_group *sg, int cpu)
+				     struct sched_group *sg)
 {
+	struct cpumask *mask = sched_domains_tmpmask;
 	struct sd_data *sdd = sd->private;
 	struct cpumask *sg_span;
+	int cpu;
+
+	build_group_mask(sd, sg, mask);
+	cpu = cpumask_first_and(sched_group_mask(sg), mask); /* balance cpu */
 
 	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
 	if (atomic_inc_return(&sg->sgc->ref) == 1)
-		build_group_mask(sd, sg);
+		cpumask_copy(sched_group_mask(sg), mask);
 
 	/*
 	 * Initialize sgc->capacity such that even if we mess up the

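For readability, init_overlap_sched_group() with the two hunks above applied
would read roughly as follows (a sketch assembled only from the diff context
above; the unchanged capacity-initialization tail is elided):

static void init_overlap_sched_group(struct sched_domain *sd,
				     struct sched_group *sg)
{
	struct cpumask *mask = sched_domains_tmpmask;	/* scratch cpumask */
	struct sd_data *sdd = sd->private;
	struct cpumask *sg_span;			/* used in the elided tail */
	int cpu;

	/* Compute the group mask into the scratch cpumask first. */
	build_group_mask(sd, sg, mask);
	cpu = cpumask_first_and(sched_group_mask(sg), mask); /* balance cpu */

	/*
	 * Take a reference on the balance cpu's sgc; on the first reference,
	 * install the freshly computed mask into it.
	 */
	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
	if (atomic_inc_return(&sg->sgc->ref) == 1)
		cpumask_copy(sched_group_mask(sg), mask);

	/* ... sgc->capacity initialization unchanged ... */
}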