Message-ID: <20170414113813.vktcpsrsuu2st2fm@hirez.programming.kicks-ass.net>
Date: Fri, 14 Apr 2017 13:38:13 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Lauro Ramos Venancio <lvenanci@...hat.com>
Cc: linux-kernel@...r.kernel.org, lwang@...hat.com, riel@...hat.com,
Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [RFC 2/3] sched/topology: fix sched groups on NUMA machines with mesh topology

On Thu, Apr 13, 2017 at 10:56:08AM -0300, Lauro Ramos Venancio wrote:
> This patch constructs the sched groups from each CPU's perspective. So,
> on a 4-node machine with ring topology, while nodes 0 and 2 keep the
> same groups as before [(3, 0, 1)(1, 2, 3)], nodes 1 and 3 get new
> groups [(0, 1, 2)(2, 3, 0)]. This allows moving tasks between any two
> nodes that are 2 hops apart.

Ah,.. so after drawing pictures I see what went wrong; duh :-(
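
For anyone who would rather not draw the pictures: a throwaway
user-space sketch of the greedy covering reproduces the group sets
quoted above (members printed in ascending order). hop1_span() and
build_groups() are made-up illustration helpers, not kernel code;
nodes are bits in an unsigned int:

/*
 * Toy model: on a 4-node ring, each node's 1-hop span is
 * {prev, self, next}; groups are built by taking the span of the
 * first node not yet covered, as build_overlap_sched_groups() does.
 */
#include <stdio.h>

#define N 4

static unsigned int hop1_span(int node)
{
	return (1u << ((node + N - 1) % N)) | (1u << node) |
	       (1u << ((node + 1) % N));
}

static void print_group(unsigned int mask)
{
	printf("(");
	for (int i = 0; i < N; i++)
		if (mask & (1u << i))
			printf(" %d", i);
	printf(" )");
}

static void build_groups(int start)
{
	unsigned int covered = 0;

	printf("node %d: ", start);
	for (int k = 0; k < N; k++) {
		int i = (start + k) % N;	/* the wrapped walk */

		if (covered & (1u << i))
			continue;
		covered |= hop1_span(i);
		print_group(hop1_span(i));
	}
	printf("\n");
}

int main(void)
{
	for (int node = 0; node < N; node++)
		build_groups(node);
	return 0;
}

Starting the walk at 0 for every node (plain for_each_cpu()) gives all
four nodes node 0's answer, [(3, 0, 1)(1, 2, 3)], which is exactly the
problem the patch fixes.
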
An equivalent patch would be (if for_each_cpu_wrap() were exposed):

@@ -521,11 +588,11 @@ build_overlap_sched_groups(struct sched_domain *sd, int cpu)
 	struct cpumask *covered = sched_domains_tmpmask;
 	struct sd_data *sdd = sd->private;
 	struct sched_domain *sibling;
-	int i;
+	int i, wrap;
 
 	cpumask_clear(covered);
 
-	for_each_cpu(i, span) {
+	for_each_cpu_wrap(i, span, cpu, wrap) {
 		struct cpumask *sg_span;
 
 		if (cpumask_test_cpu(i, covered))

We need to start iterating at @cpu, not start at 0 every time.
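
FWIW, the iteration order is the whole trick. A two-minute sketch of
what the wrapped walk looks like (walk_wrap() is an illustration, not
the real for_each_cpu_wrap(), which at this point was still a macro
private to kernel/sched/fair.c):

#include <stdio.h>

#define NR_BITS 8

/* Visit every set bit in @mask once, starting at @start and wrapping
 * around, instead of always starting at bit 0. */
static void walk_wrap(unsigned int mask, int start)
{
	for (int k = 0; k < NR_BITS; k++) {
		int cpu = (start + k) % NR_BITS;

		if (mask & (1u << cpu))
			printf("%d ", cpu);
	}
	printf("\n");
}

int main(void)
{
	walk_wrap(0xffu, 0);	/* 0 1 2 3 4 5 6 7 */
	walk_wrap(0xffu, 3);	/* 3 4 5 6 7 0 1 2 -- the order we want */
	return 0;
}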