Message-ID: <20170428135339.diwcabxhcpu4b5fw@hirez.programming.kicks-ass.net>
Date: Fri, 28 Apr 2017 15:53:39 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...nel.org, lvenanci@...hat.com
Cc: lwang@...hat.com, riel@...hat.com, efault@....de,
tglx@...utronix.de, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/14] sched/topology fixes
On Fri, Apr 28, 2017 at 03:19:58PM +0200, Peter Zijlstra wrote:
> Hi,
>
> These patches are based upon the hard work of Lauro. He put in the time and
> effort to understand and debug the code.
>
> So while I didn't take many of his actual patches, I want to thank him for
> doing the work. Hopefully the "Debugged-by:" tag conveys some of that.
>
> In any case, please have a look. I think these should about cover things.
>
> Rik, Lauro, could you guys in particular look at the final patch that adds a
> few comments? I attempted to document the intent and my understanding there,
> but given I've been staring at this stuff too long, I could've missed the
> obvious.
>
> Comments and/or suggestions welcome.
>
Also, the following occurred to me: a group's balance mask should always be
a subset of the group's span, i.e.:

	sg_span & sg_mask == sg_mask

Therefore, we don't need to do the whole "sg_span &" business.

Hmm?
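
If we want to be paranoid about that invariant, something along these lines
(purely illustrative, not part of the patch below) could check it wherever a
group's mask gets set up; cpumask_subset() returns true when the first mask
is contained in the second:

	/*
	 * Hypothetical debug check: the balance mask must never
	 * contain a CPU outside the group's span.
	 */
	WARN_ON_ONCE(!cpumask_subset(sched_group_mask(sg),
				     sched_group_cpus(sg)));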
---
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7996,7 +7996,7 @@ static int active_load_balance_cpu_stop(
static int should_we_balance(struct lb_env *env)
{
struct sched_group *sg = env->sd->groups;
- struct cpumask *sg_cpus, *sg_mask;
+ struct cpumask *sg_mask;
int cpu, balance_cpu = -1;
/*
@@ -8006,11 +8006,10 @@ static int should_we_balance(struct lb_e
if (env->idle == CPU_NEWLY_IDLE)
return 1;
- sg_cpus = sched_group_cpus(sg);
sg_mask = sched_group_mask(sg);
/* Try to find first idle cpu */
- for_each_cpu_and(cpu, sg_cpus, env->cpus) {
- if (!cpumask_test_cpu(cpu, sg_mask) || !idle_cpu(cpu))
+ for_each_cpu_and(cpu, sg_mask, env->cpus) {
+ if (!idle_cpu(cpu))
continue;
balance_cpu = cpu;
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -85,7 +85,8 @@ static int sched_domain_debug_one(struct
group->sgc->id,
cpumask_pr_args(sched_group_cpus(group)));
- if ((sd->flags & SD_OVERLAP) && !cpumask_full(sched_group_mask(group))) {
+ if ((sd->flags & SD_OVERLAP) &&
+ !cpumask_equal(sched_group_mask(group), sched_group_cpus(group))) {
printk(KERN_CONT " mask=%*pbl",
cpumask_pr_args(sched_group_mask(group)));
}
@@ -505,7 +506,7 @@ enum s_alloc {
*/
int group_balance_cpu(struct sched_group *sg)
{
- return cpumask_first_and(sched_group_cpus(sg), sched_group_mask(sg));
+ return cpumask_first(sched_group_mask(sg));
}
@@ -856,7 +857,7 @@ build_sched_groups(struct sched_domain *
continue;
group = get_group(i, sdd, &sg);
- cpumask_setall(sched_group_mask(sg));
+ cpumask_copy(sched_group_mask(sg), sched_group_cpus(sg));
for_each_cpu(j, span) {
if (get_group(j, sdd, NULL) != group)
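
As a standalone sanity check of the set algebra behind both simplifications
(plain userspace C with bitmasks standing in for cpumasks; nothing
kernel-specific): if mask is a subset of span, then span & mask == mask, and
the first set bit of (span & mask) is the first set bit of mask, which is why
cpumask_first_and() can become cpumask_first() and the sg_span iteration can
use the mask directly:

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long span = 0xf0;	/* group span: CPUs 4-7 */
		unsigned long mask = 0x30;	/* balance mask: CPUs 4-5 */

		/* subset invariant: span & mask == mask */
		assert((span & mask) == mask);

		/* first_and(span, mask) == first(mask) */
		assert(__builtin_ctzl(span & mask) == __builtin_ctzl(mask));

		printf("balance cpu: %d\n", __builtin_ctzl(mask));
		return 0;
	}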