Message-ID: <20170425151200.fcbdovlhu6l5efmn@hirez.programming.kicks-ass.net>
Date: Tue, 25 Apr 2017 17:12:00 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Lauro Venancio <lvenanci@...hat.com>
Cc: lwang@...hat.com, riel@...hat.com, Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] sched/topology: the group balance cpu must be a cpu where the group is installed

On Tue, Apr 25, 2017 at 11:33:51AM -0300, Lauro Venancio wrote:
> On 04/25/2017 09:17 AM, Peter Zijlstra wrote:
> > With the fact that sched_group_cpus(sd->groups) ==
> > sched_domain_span(sibling->child) (if child exists) established in the
> > previous patches, could we not simplify this like the below?
> We can. We just need to handle the case where there is no child, otherwise
> we end up with empty masks.
> We have to replicate the build_group_from_child_sched_domain() behavior:
>
> 	if (sd->child)
> 		cpumask_copy(sg_span, sched_domain_span(sd->child));
> 	else
> 		cpumask_copy(sg_span, sched_domain_span(sd));
>
> So we need something like:
>
> 	struct sched_domain *gsd;
>
> 	if (sibling->child)
> 		gsd = sibling->child;
> 	else
> 		gsd = sibling;
>
> 	if (!cpumask_equal(sg_span, sched_domain_span(gsd)))
> 		continue;
Right, I ran into that already. My truncated topologies (single cpu per
node) instantly triggered it. But somehow removing the WARN was
sufficient and the mask didn't end up empty.
This is the ring topo:
[    0.086772] CPU0 attaching sched-domain:
[    0.087005]  domain 0: span 0-1,3 level NUMA
[    0.088002]   groups: 0 (mask: 0), 1, 3
[    0.089002]  domain 1: span 0-3 level NUMA
[    0.090002]   groups: 0-1,3 (mask: 0) (cpu_capacity: 3072), 1-3 (cpu_capacity: 3072)
[    0.091005] CPU1 attaching sched-domain:
[    0.092003]  domain 0: span 0-2 level NUMA
[    0.093002]   groups: 1 (mask: 1), 2, 0
[    0.094002]  domain 1: span 0-3 level NUMA
[    0.095002]   groups: 0-2 (mask: 1) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)
[    0.096005] CPU2 attaching sched-domain:
[    0.097002]  domain 0: span 1-3 level NUMA
[    0.098002]   groups: 2 (mask: 2), 3, 1
[    0.099002]  domain 1: span 0-3 level NUMA
[    0.100002]   groups: 1-3 (mask: 2) (cpu_capacity: 3072), 0-1,3 (cpu_capacity: 3072)
[    0.101004] CPU3 attaching sched-domain:
[    0.102002]  domain 0: span 0,2-3 level NUMA
[    0.103002]   groups: 3 (mask: 3), 0, 2
[    0.104002]  domain 1: span 0-3 level NUMA
[    0.105002]   groups: 0,2-3 (mask: 3) (cpu_capacity: 3072), 0-2 (cpu_capacity: 3072)
See how the domain-0 mask isn't empty.
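(Aside: the spans above are what you get from a four-node ring; a
node_distance() table like the one below, my reconstruction rather than
a dump of the actual setup, produces exactly these domains.)

	/*
	 * 4-node ring: local 10, neighbour 20, opposite 30.  Each cpu's
	 * level-0 NUMA domain then spans itself plus its two ring
	 * neighbours, e.g. CPU0 -> 0-1,3 as in the log above.
	 */
	static const int ring_distance[4][4] = {
		{ 10, 20, 30, 20 },
		{ 20, 10, 20, 30 },
		{ 30, 20, 10, 20 },
		{ 20, 30, 20, 10 },
	};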
That said, when !child, ->groups ends up being a single cpu.
So I was thinking:
	const struct cpumask *i_span;

	if (sibling->child)
		i_span = sched_domain_span(sibling->child);
	else
		i_span = cpumask_of(i);

	if (!cpumask_equal(sg_span, i_span))
		continue;
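(To see why the !child leg wants cpumask_of(i), here is a throwaway
user-space mock of just that comparison; plain bitmasks stand in for
struct cpumask, and all the names and numbers are invented, based on
the single-cpu-per-node case above.)

	#include <stdio.h>

	/* Toy stand-in for struct cpumask: bit n represents cpu n. */
	typedef unsigned int mask_t;

	struct toy_domain {
		mask_t span;
		struct toy_domain *child;	/* NULL at the lowest level */
	};

	int main(void)
	{
		/* CPU1's domain from the log (span 0-2), no child. */
		struct toy_domain sibling = { .span = 0x7, .child = NULL };
		int i = 1;			/* the cpu being considered */
		mask_t sg_span = 1u << 1;	/* ->groups is the single cpu 1 */
		mask_t i_span;

		if (sibling.child)
			i_span = sibling.child->span;	/* sched_domain_span(child) */
		else
			i_span = 1u << i;		/* cpumask_of(i) */

		/*
		 * Comparing against sibling.span (0x7) could never equal a
		 * single-cpu group span; cpumask_of(i) does match it.
		 */
		printf("match: %s\n", i_span == sg_span ? "yes" : "no");
		return 0;
	}

Running it prints "match: yes"; swap the else leg for sibling.span and
it prints "no", i.e. the check would never match and we would skip
every cpu.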
But first I'll try to figure out why I'm not seeing empty masks.