Message-ID: <1363706483.22553.67.camel@laptop>
Date: Tue, 19 Mar 2013 16:21:23 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 8/8] sched: reset lb_env when redo in load_balance()
On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> So now, when we redo in load_balance(), we should reset some fields of
> lb_env to ensure that load_balance() works for the initial cpu, not for
> another cpu in its group. Correct this.
>
> Cc: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 70631e8..25c798c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5014,14 +5014,20 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>
> struct lb_env env = {
> .sd = sd,
> - .dst_cpu = this_cpu,
> - .dst_rq = this_rq,
> .dst_grpmask = dst_grp,
> .idle = idle,
> - .loop_break = sched_nr_migrate_break,
> .cpus = cpus,
> };
>
> + schedstat_inc(sd, lb_count[idle]);
> + cpumask_copy(cpus, cpu_active_mask);
> +
> +redo:
> + env.dst_cpu = this_cpu;
> + env.dst_rq = this_rq;
> + env.loop = 0;
> + env.loop_break = sched_nr_migrate_break;
> +
> /* For NEWLY_IDLE load_balancing, we don't need to consider
> * other cpus in our group */
> if (idle == CPU_NEWLY_IDLE) {
OK, so this is the case where we tried to balance !this_cpu and found
ALL_PINNED, right?
You can only get here in very weird cases where people love their
sched_setaffinity() waaaaay too much; do we care? Why not give up?
Also, looking at this, shouldn't we consider env->cpus in
can_migrate_task() where we compute new_dst_cpu?
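
Something like the below (completely untested, names per the 3.9-era
fair.c) is what I have in mind: restrict the new_dst_cpu search in
can_migrate_task() to cpus that are still in the balance mask:

	int cpu;

	/*
	 * Untested sketch: only pick a new_dst_cpu that is both in our
	 * sched_group and still present in env->cpus, so we never retry
	 * on a cpu the balance pass already gave up on.
	 */
	for_each_cpu_and(cpu, env->dst_grpmask, env->cpus) {
		if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p))) {
			env->flags |= LBF_SOME_PINNED;
			env->new_dst_cpu = cpu;
			break;
		}
	}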