Message-ID: <20160331114437.GA28591@linux.vnet.ibm.com>
Date: Thu, 31 Mar 2016 17:14:37 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
Gautham R Shenoy <ego@...ux.vnet.ibm.com>,
Michael Neuling <mikey@...ling.org>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
tim.c.chen@...ux.intel.com
Subject: Re: [PATCH 1/3] sched/fair: Fix asym packing to select correct cpu
* Peter Zijlstra <peterz@...radead.org> [2016-03-29 14:19:24]:
> On Wed, Mar 23, 2016 at 05:04:40PM +0530, Srikar Dronamraju wrote:
> > If asymmetric packing is used when target cpu is busy,
> > update_sd_pick_busiest(), can select a lightly loaded cpu.
> > find_busiest_group() has checks to ensure asym packing is only used
> > when target cpu is not busy. However it may not be able to avoid a
> > lightly loaded cpu selected by update_sd_pick_busiest from being
> > selected as source cpu for eventual load balancing.
>
> So my brain completely fails to parse. What? Why?
Let's think of a situation where there are 4 cpus in a core, with the
core having the SD_ASYM_PACKING flag set. Say the tasks are spread such
that cpu0 has 1, cpu1 has 2, cpu2 has 3 and cpu3 has 2 threads. (Assume
all threads have equal load contributions.)

Now with the current unpatched code, with cpu0 running the load
balancing, it is not guaranteed to pick cpu2 (which is the busiest).
Here is what happens:

1. update_sd_lb_stats() (with the help of update_sd_pick_busiest())
   may pick the group of cpu1 as sds.busiest.
2. check_asym_packing() will return false.
3. find_busiest_group() will still continue with cpu1 as sds.busiest
   and is able to do load balancing.

After that load balance, cpu0 will have 2, cpu1 will have 1, cpu2 will
have 3 and cpu3 will have 2 threads. So because asym packing is used in
update_sd_pick_busiest() even when the target cpu is busy, we may not
select the busiest cpu, and the load is not balanced immediately. Only
after further load balance passes do all cpus end up with 2 threads
each.

With the patched code, cpu2 is picked instead, and all cpus end up with
2 threads after the first load balance itself.
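To make this concrete, here is a toy userspace sketch (not kernel code)
that just replays the arithmetic above: one balance pass pulls a single
task from the picked source cpu onto cpu0, comparing a pick of cpu1
(what the unpatched tie-break can end up with) against cpu2 (the actual
busiest).

#include <stdio.h>

/*
 * Toy model of the example above, not kernel code: one balance pass
 * pulls a single task from the picked source cpu onto cpu0.
 */
static void balance_once(int nr_running[4], int src)
{
	nr_running[src]--;
	nr_running[0]++;
}

static void show(const char *tag, const int nr_running[4])
{
	printf("%s: cpu0=%d cpu1=%d cpu2=%d cpu3=%d\n", tag,
	       nr_running[0], nr_running[1], nr_running[2], nr_running[3]);
}

int main(void)
{
	int unpatched[4] = { 1, 2, 3, 2 };
	int patched[4]   = { 1, 2, 3, 2 };

	balance_once(unpatched, 1);	/* unpatched tie-break may pick cpu1 */
	balance_once(patched, 2);	/* patched code picks cpu2, the busiest */

	show("unpatched", unpatched);	/* 2 1 3 2: still imbalanced */
	show("patched  ", patched);	/* 2 2 2 2: balanced in one pass */
	return 0;
}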
> > if (!(env->sd->flags & SD_ASYM_PACKING))
> > return true;
> >
> > + if (env->idle == CPU_NOT_IDLE)
> > + return true;
>
> OK, so this matches check_asym_packing() and makes sense, we don't want
> to pull work if we're not idle.
>
> But please add a comment with the condition, its always hard to remember
> the intent of these things later.
Okay will do.
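Something along these lines is what I have in mind (the comment wording
is only a first cut):

	if (!(env->sd->flags & SD_ASYM_PACKING))
		return true;

	/*
	 * If the destination cpu is already busy, don't let asym
	 * packing override the plain load based choice above; asym
	 * packing only pulls work towards an idle lower numbered cpu.
	 * This matches what check_asym_packing() does.
	 */
	if (env->idle == CPU_NOT_IDLE)
		return true;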
> > /*
> > * ASYM_PACKING needs to move all the work to the lowest
> > * numbered CPUs in the group, therefore mark all groups
> > @@ -6526,7 +6528,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> > if (!sds->busiest)
> > return true;
> >
> > - if (group_first_cpu(sds->busiest) > group_first_cpu(sg))
> > + if (group_first_cpu(sds->busiest) < group_first_cpu(sg))
> > return true;
> > }
>
> Right, so you want to start by moving the highest possible cpu's work
> down. The end result is the same, but this way you can reach lower power
> states quicker.
Yes. Asym packing is supposed to move the highest possible cpu's work
down, so this check was in a way defeating that purpose.
>
> Again, please add a comment.
Okay.
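Again, something like this (wording open to suggestions):

		if (!sds->busiest)
			return true;

		/*
		 * Prefer the group whose first cpu is the highest
		 * numbered one as the source, so that work keeps
		 * moving down towards the lowest numbered cpus.
		 */
		if (group_first_cpu(sds->busiest) < group_first_cpu(sg))
			return true;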
> > @@ -6864,8 +6869,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
> > busiest = &sds.busiest_stat;
> >
> > /* ASYM feature bypasses nice load balance check */
> > - if ((env->idle == CPU_IDLE || env->idle == CPU_NEWLY_IDLE) &&
> > - check_asym_packing(env, &sds))
> > + if (check_asym_packing(env, &sds))
> > return sds.busiest;
> >
> > /* There is no busy sibling group to pull tasks from */
>
> OK, this is an effective NOP but results in cleaner code.
Yes, this is a nop.
>
--
Thanks and Regards
Srikar Dronamraju