Message-ID: <e01cc082-6332-2a49-9f2f-3a1144f4426d@linux.intel.com>
Date:   Wed, 24 Feb 2021 10:55:47 +0800
From:   "Li, Aubrey" <aubrey.li@...ux.intel.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Aubrey Li <aubrey.li@...el.com>, Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [RFC PATCH v1] sched/fair: limit load balance redo times at the
 same sched_domain level

On 2021/2/24 1:33, Vincent Guittot wrote:
> On Tue, 23 Feb 2021 at 06:41, Li, Aubrey <aubrey.li@...ux.intel.com> wrote:
>>
>> Hi Vincent,
>>
>> Sorry for the delay, I just returned from Chinese New Year holiday.
>>
>> On 2021/1/25 22:51, Vincent Guittot wrote:
>>> On Mon, 25 Jan 2021 at 15:00, Li, Aubrey <aubrey.li@...ux.intel.com> wrote:
>>>>
>>>> On 2021/1/25 18:56, Vincent Guittot wrote:
>>>>> On Mon, 25 Jan 2021 at 06:50, Aubrey Li <aubrey.li@...el.com> wrote:
>>>>>>
>>>>>> A long-tail load balance cost is observed on the newly idle path;
>>>>>> it is caused by a race window between the first nr_running check
>>>>>> of the busiest runqueue and its nr_running recheck in detach_tasks.
>>>>>>
>>>>>> Before the busiest runqueue is locked, its tasks could be pulled
>>>>>> by other CPUs so that its nr_running drops to 1; this causes
>>>>>> detach_tasks to break out with LBF_ALL_PINNED still set.
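For context, this is roughly where the race window sits (a simplified,
non-verbatim sketch of load_balance() in kernel/sched/fair.c; details elided):

	if (busiest->nr_running > 1) {	/* first check: busiest is NOT locked */
		env.flags |= LBF_ALL_PINNED;
		env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);

more_balance:
		rq_lock_irqsave(busiest, &rf);
		/*
		 * Between the unlocked check above and this point, other CPUs
		 * may pull tasks so that busiest->nr_running drops to 1. The
		 * recheck inside detach_tasks() then moves nothing and leaves
		 * LBF_ALL_PINNED set, sending load_balance() down the redo path.
		 */
		cur_ld_moved = detach_tasks(&env);
		/* ... */
	}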
>>>>>
>>>>> We had better detect that when trying to detach a task, as below:
>>>>
>>>> As I understand it, this is a compromise. If we give up load balancing
>>>> this time due to the race condition, we do reduce the load balance cost on the
>>>> newly idle path, but if there really is an imbalance at the same sched_domain
>>>
>>> The redo path is there for the case where LB has found an imbalance but
>>> can't move some load from this busiest rq to the dest rq because of cpu
>>> affinity. So it tries to fix the imbalance by moving load onto another
>>> rq of the group. In your case, the imbalance has disappeared because
>>> it has already been pulled by another rq, so you don't have to try to
>>> find another imbalance. And I would even say you should not, in order
>>> to let other levels take a chance to spread the load.
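For context, the redo path being referred to looks roughly like this (a
simplified, non-verbatim sketch from load_balance() in kernel/sched/fair.c):

	if (unlikely(env.flags & LBF_ALL_PINNED)) {
		/* This busiest rq had nothing movable; drop it as a candidate. */
		__cpumask_clear_cpu(cpu_of(busiest), cpus);
		/*
		 * Retry only if candidate busiest CPUs remain outside the
		 * destination group; otherwise give up at this level.
		 */
		if (!cpumask_subset(cpus, env.dst_grpmask)) {
			env.loop = 0;
			env.loop_max = 0;
			goto redo;
		}
		goto out_all_pinned;
	}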
>>>
>>>> level, we have to wait for the next softirq entry to handle that imbalance. This
>>>> means the tasks on the second busiest runqueue have to stay longer, which could
>>>> introduce tail latency as well. That's why I introduced a variable to control
>>>> the redo loops. I'll send this to the benchmark queue to see if it makes any
>>>> difference.
>>>
>>> TBH, I don't like multiplying the number of knobs
>>
>> Sure, I can take your approach. :)
>>
>>>>>
>>>>> --- a/kernel/sched/fair.c
>>>>> +++ b/kernel/sched/fair.c
>>>>> @@ -7688,6 +7688,16 @@ static int detach_tasks(struct lb_env *env)
>>>>>
>>>>>         lockdep_assert_held(&env->src_rq->lock);
>>>>>
>>>>> +       /*
>>>>> +        * Another CPU has emptied this runqueue in the meantime.
>>>>> +        * Just return and let load_balance() finish properly.
>>>>> +        */
>>>>> +       if (env->src_rq->nr_running <= 1 && !env->loop) {
>>
>> May I know why !env->loop is needed here? IIUC, if detach_tasks is invoked
> 
> IIRC, my point was to do the test only when trying to detach the 1st
> task. A lot of things can happen when a break is involved, but TBH I
> can't remember a precise use case. It may be overcautious.

When the break happens, the rq is unlocked and local irqs are restored, so it's
still possible that the rq is emptied by another CPU.
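To illustrate (a simplified, non-verbatim sketch from kernel/sched/fair.c):
detach_tasks() requests a break after scanning loop_break tasks, and
load_balance() drops the rq lock before looping back, which opens the window:

	/* in detach_tasks() */
	if (env->loop > env->loop_break) {
		env->loop_break += sched_nr_migrate_break;
		env->flags |= LBF_NEED_BREAK;
		break;
	}

	/* in load_balance(), after detach_tasks() returns */
	rq_unlock(env.src_rq, &rf);
	/* ... */
	local_irq_restore(rf.flags);

	if (env.flags & LBF_NEED_BREAK) {
		env.flags &= ~LBF_NEED_BREAK;
		goto more_balance;	/* rq was unlocked; it may now be empty */
	}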

> 
>> from LBF_NEED_BREAK, env->loop could be non-zero, but as long as src_rq's
>> nr_running <= 1, we should return immediately with the LBF_ALL_PINNED flag cleared.
>>
>> How about the following change?
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 04a3ce20da67..1761d33accaa 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -7683,8 +7683,11 @@ static int detach_tasks(struct lb_env *env)
>>                  * We don't want to steal all, otherwise we may be treated likewise,
>>                  * which could at worst lead to a livelock crash.
>>                  */
>> -               if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1)
>> +               if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1) {
> 
> IMO, we must do the test before:  while (!list_empty(tasks)) {
> 
> because src_rq might have become empty if waiting tasks have been
> pulled by another CPU and the running one became idle in the meantime.

Okay, and after the running one becomes idle, LBF_ALL_PINNED is still set and
needs to be cleared as well. Thanks!
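So the eventual shape would be something like the below (an illustrative,
untested sketch): do the check at the top of detach_tasks(), before the
task-list walk, and clear the flag there:

	static int detach_tasks(struct lb_env *env)
	{
		struct list_head *tasks = &env->src_rq->cfs_tasks;
		/* ... */
		lockdep_assert_held(&env->src_rq->lock);

		/*
		 * Source run queue has been emptied by another CPU, clear
		 * LBF_ALL_PINNED flag as we will not test any task.
		 */
		if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1) {
			env->flags &= ~LBF_ALL_PINNED;
			return 0;
		}

		while (!list_empty(tasks)) {
			/* ... */
		}
	}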

> 
>> +                       /* Clear the flag as we will not test any task */
>> +                       env->flags &= ~LBF_ALL_PINNED;
>>                         break;
>> +               }
>>
>>                 p = list_last_entry(tasks, struct task_struct, se.group_node);
>>
>> Thanks,
>> -Aubrey
