Message-ID: <1364898582.18374.17.camel@laptop>
Date: Tue, 02 Apr 2013 12:29:42 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
Alex Shi <alex.shi@...el.com>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Namhyung Kim <namhyung@...nel.org>
Subject: Re: [PATCH 2/5] sched: factor out code to should_we_balance()
On Tue, 2013-04-02 at 12:00 +0200, Peter Zijlstra wrote:
> On Tue, 2013-04-02 at 18:50 +0900, Joonsoo Kim wrote:
> >
> > It seems that there is some misunderstanding about this patch.
> > In this patch, we don't iterate all groups. Instead, we iterate on
> > cpus of local sched_group only. So there is no penalty you mentioned.
>
> OK, I'll go stare at it again..
Ah, I see: you're doing should_we_balance() _before_
find_busiest_group(), and instead of reusing its iteration you're doing
another for_each_cpu() in there.
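For context, the resulting call order in load_balance() would then be
roughly the below (a sketch, not the actual patch; the *balance
out-parameter is assumed from the existing load_balance() signature):

	/* in load_balance(), before any statistics are computed */
	if (!should_we_balance(&env)) {
		/* not the designated CPU; tell the caller to stop */
		*balance = 0;
		goto out_balanced;
	}

	/* only the designated CPU pays for the expensive stats pass */
	group = find_busiest_group(&env);

so only one CPU per group proceeds past the check.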
I'd write the thing like:
static bool should_we_balance(struct lb_env *env)
{
	struct sched_group *sg = env->sd->groups;
	struct cpumask *sg_cpus, *sg_mask;
	int cpu, balance_cpu = -1;

	if (env->idle == CPU_NEWLY_IDLE)
		return true;

	sg_cpus = sched_group_cpus(sg);
	sg_mask = sched_group_mask(sg);

	for_each_cpu_and(cpu, sg_cpus, env->cpus) {
		if (!cpumask_test_cpu(cpu, sg_mask))
			continue;

		if (!idle_cpu(cpu))
			continue;

		balance_cpu = cpu;
		break;
	}

	if (balance_cpu == -1)
		balance_cpu = group_balance_cpu(sg);

	return balance_cpu == env->dst_cpu;
}
I also considered doing the group_balance_cpu() first to avoid having
to do the idle_cpu() scan, but that's a slight behavioural change
afaict.
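One way to read that alternative (my interpretation of the idea, not
code from the thread):

	int balance_cpu = group_balance_cpu(sg);

	/*
	 * Hypothetical variant: skip the scan whenever the designated
	 * balance CPU is itself idle, and only look for another idle
	 * CPU when it is busy.  The behavioural difference: an idle
	 * group_balance_cpu() now wins over an idle CPU that merely
	 * comes earlier in the iteration order.
	 */
	if (!idle_cpu(balance_cpu)) {
		/* ... the for_each_cpu_and() idle scan from above ... */
	}

	return balance_cpu == env->dst_cpu;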