Message-ID: <e75332dd-cd10-a08a-e80d-f87ac5458c2a@arm.com>
Date: Tue, 23 May 2017 12:45:54 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: "Christ, Austin" <austinwc@...eaurora.org>,
Jeffrey Hugo <jhugo@...eaurora.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org
Cc: Tyler Baicar <tbaicar@...eaurora.org>,
Timur Tabi <timur@...eaurora.org>
Subject: Re: [PATCH V3 1/2] sched/fair: Fix load_balance() affinity redo path
Hey Austin,
On 22/05/17 20:57, Christ, Austin wrote:
> Hey Dietmar,
>
>
> On 5/22/2017 3:48 AM, Dietmar Eggemann wrote:
>> On 19/05/17 14:31, Dietmar Eggemann wrote:
>>> On 18/05/17 20:36, Jeffrey Hugo wrote:
>>>
>>> [...]
>>>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index d711093..a5d41b1 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -8220,7 +8220,24 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>>>>                  /* All tasks on this runqueue were pinned by CPU affinity */
>>>>                  if (unlikely(env.flags & LBF_ALL_PINNED)) {
>>>>                          cpumask_clear_cpu(cpu_of(busiest), cpus);
>>>> -                        if (!cpumask_empty(cpus)) {
>>>> +                        /*
>>>> +                         * dst_cpu is not a valid busiest cpu in the following
>>>> +                         * check since load cannot be pulled from dst_cpu to be
>>>> +                         * put on dst_cpu.
>>>> +                         */
>>>> +                        cpumask_clear_cpu(env.dst_cpu, cpus);
>>>> +                        /*
>>>> +                         * Go back to "redo" iff the load-balance cpumask
>>>> +                         * contains other potential busiest cpus for the
>>>> +                         * current sched domain.
>>>> +                         */
>>>> +                        if (cpumask_intersects(cpus, sched_domain_span(env.sd))) {
>>>> +                                /*
>>>> +                                 * Now that the check has passed, reenable
>>>> +                                 * dst_cpu so that load can be calculated on
>>>> +                                 * it in the redo path.
>>>> +                                 */
>>>> +                                cpumask_set_cpu(env.dst_cpu, cpus);
>>> IMHO, this will work nicely and it's way easier.
>> This was too quick ... if we still have other potential dst cpus
>> available and cpu_of(busiest) is the last src cpu, then this will fail.
>>
>> It does work on an sd with 'group_weight == 1', e.g. your MC sd where
>> 'sd->child == NULL'.
>>
>> But IMHO 'group_imbalance' propagation has to work on higher sd levels
>> as well.
> Can you clarify the fail case you are seeing? We are only aware of
> dst_cpu being changed under [1] where a dst_cpu will try to move work to
> one of its sched_group siblings.
>
> I'm also not entirely sure I understand what you mean about the flag
> being propagated to higher sd levels.
The propagation of the 'imbalance' information should not only happen
between the lowest sd (sd->child == NULL) and its parent (MC->DIE in
your example) but between all {sd, sd->parent} pairs.
Imagine your machine had another sd on top of DIE.
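To spell out what I mean by propagation: when affinity prevented us from
reaching balance, load_balance() records this in the local group of the
parent sd, and the lb pass at the next-higher level picks it up (via
sg_imbalanced()). Roughly like this (paraphrased from kernel/sched/fair.c,
so treat it as a sketch rather than a verbatim quote):

struct sched_domain *sd_parent = sd->parent;

...

/*
 * We failed to reach balance because of affinity: mark the local
 * group of the parent sched domain as imbalanced so that the lb
 * pass at the next-higher sd level can see and react to it.
 */
if (sd_parent) {
        int *group_imbalance = &sd_parent->groups->sgc->imbalance;

        if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
                *group_imbalance = 1;
}

This is why the 'redo' path has to keep working correctly at every sd
level, not just at the lowest one.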
I recreated the issue I pointed out on my hikey board (2*4) (w/o this
extra sd on top of DIE) and hotplugged out cpus 2,3,6,7, so I have a
system with the following DIE sched_groups (sg):

sg1(0,1) and sg2(4,5) <- the DIE-level sg's contain more than 1 logical cpu.

As a workload I run 4 25% tasks affine to [0,1]. These tasks are
'SOURCE' PINNED for a DIE lb attempt sg2<-sg1.
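In case you want to reproduce it, each of the 4 tasks is basically just
the following (an illustrative stand-in, not the exact tool I used; the
10ms run / 30ms sleep period is only one way to get ~25% utilization):

#define _GNU_SOURCE
#include <sched.h>
#include <time.h>

int main(void)
{
        cpu_set_t set;
        struct timespec sleep_time = { 0, 30 * 1000 * 1000 };  /* 30ms */

        /* affine the task to [0,1] */
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        CPU_SET(1, &set);
        sched_setaffinity(0, sizeof(set), &set);

        for (;;) {
                struct timespec start, now;

                /* busy-loop for ~10ms, then sleep 30ms -> ~25% util */
                clock_gettime(CLOCK_MONOTONIC, &start);
                do {
                        clock_gettime(CLOCK_MONOTONIC, &now);
                } while ((now.tv_sec - start.tv_sec) * 1000000000L +
                         (now.tv_nsec - start.tv_nsec) < 10 * 1000 * 1000);

                nanosleep(&sleep_time, NULL);
        }
}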
With the following instrumentation added to load_balance():
if (unlikely(env.flags & LBF_ALL_PINNED)) {
        cpumask_clear_cpu(cpu_of(busiest), cpus);
        if (!cpumask_empty(cpus)) {
                ...
                printk("goto redo: sd=%s dst_cpu=%d src_cpu=%d cpus=%*pbl dst_grpmask=%*pbl\n",
                       sd->name, env.dst_cpu, cpu_of(busiest),
                       cpumask_pr_args(cpus),
                       cpumask_pr_args(env.dst_grpmask));
                goto redo;
        }
While running the workload I sometimes get:
...
goto redo: sd=DIE dst_cpu=4 src_cpu=1 cpus=0,4-5 dst_grpmask=4-5
goto redo: sd=DIE dst_cpu=4 src_cpu=0 cpus=4-5 dst_grpmask=4-5
...
So even though the 'redo' handling has already tried both possible
src_cpus, we would still enter another 'redo' pass, even if you remove
dst_cpu=4 temporarily, because of cpu=5.
You could replace:

  cpumask_clear_cpu(env.dst_cpu, cpus)
  cpumask_set_cpu(env.dst_cpu, cpus)

with:

  cpumask_andnot(cpus, cpus, env.dst_grpmask)
  cpumask_or(cpus, cpus, env.dst_grpmask)
but then env.dst_grpmask can't be set to NULL for CPU_NEWLY_IDLE
anymore, and you're almost back at the snippet I sent out for v1:
https://marc.info/?l=linux-kernel&m=149486020010389&w=2
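For completeness, inside load_balance() that variant would look roughly
like this (a sketch from memory of the surrounding code, not a tested
patch):

/* All tasks on this runqueue were pinned by CPU affinity */
if (unlikely(env.flags & LBF_ALL_PINNED)) {
        cpumask_clear_cpu(cpu_of(busiest), cpus);
        /*
         * Drop the whole destination group: none of its cpus can act
         * as a busiest (source) cpu for this lb attempt. Note that
         * this relies on env.dst_grpmask being valid (non-NULL) also
         * for CPU_NEWLY_IDLE.
         */
        cpumask_andnot(cpus, cpus, env.dst_grpmask);
        if (!cpumask_empty(cpus)) {
                /* Put the dst group back before retrying. */
                cpumask_or(cpus, cpus, env.dst_grpmask);
                env.loop = 0;
                env.loop_break = sched_nr_migrate_break;
                goto redo;
        }
        goto out_all_pinned;
}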
[...]