Message-ID: <CAKfTPtA8-Z=+ibzKgYXqf2CLemjB0h=TYSB_5Z4YJvAGTJPPHg@mail.gmail.com>
Date:   Thu, 27 Apr 2017 09:00:48 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Tejun Heo <tj@...nel.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
        Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH 1/2] sched/fair: Fix how load gets propagated from cfs_rq to its sched_entity

On 27 April 2017 at 00:40, Tejun Heo <tj@...nel.org> wrote:
> Hello,
>
> On Wed, Apr 26, 2017 at 06:51:23PM +0200, Vincent Guittot wrote:
>> > It's not temporary.  The weight of a group is its shares, which is
>> > its load fraction of the configured weight of the group.  Assuming UP, if
>> > you configure a group to a weight of 1024 and have any task running
>> > full-tilt in it, the group will converge to a load of 1024.  The
>> > problem is that the propagation logic currently does something
>> > completely different and temporarily pushes down the load whenever it
>> > triggers.
>>
>> Ok, I see your point and agree that there is an issue when propagating the
>> load_avg of a task group which has tasks with a lower weight than the shares,
>> but your proposal has an issue: it uses runnable_load_avg instead of
>> load_avg, and this makes the propagation of load_avg incorrect. Something
>> like the change below, which keeps using load_avg, solves the problem:
>>
>> +     if (gcfs_rq->load.weight) {
>> +             long shares = scale_load_down(calc_cfs_shares(gcfs_rq, gcfs_rq->tg));
>> +
>> +             load = min(gcfs_rq->avg.load_avg *
>> +                        shares / scale_load_down(gcfs_rq->load.weight), shares);
>>
>> I have run schbench with the change above on v4.11-rc8 and the latencies are ok.
>
> Hmm... so, I'll test this, but it wouldn't solve the problem of
> root's runnable_load_avg being out of sync with the approximate sum of
> all task loads, which is the cause of the latencies that I'm seeing.
>
> Are you saying that with the above change, you're not seeing the
> higher latency issue that you reported in the other reply?

Yes, I don't see any latency regression vs v4.11-rc8 with the above
change, which keeps using load_avg but fixes the propagation for a task
with a lower weight than the task group's shares.
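
For what it's worth, the clamp above boils down to the standalone sketch
below (illustrative only: plain long arithmetic instead of the kernel's
scale_load_down() fixed-point helpers, and the shares passed in directly
rather than computed by calc_cfs_shares()):

#include <stdio.h>

/*
 * Illustrative sketch only: scale a group cfs_rq's load_avg by the
 * group's shares and clamp the result, as in the snippet above.
 * Plain long arithmetic stands in for the kernel's fixed-point load
 * tracking.
 */
static long propagated_load(long grp_load_avg, long grp_weight, long shares)
{
        long load;

        if (!grp_weight)
                return 0;

        /* load_avg * shares / weight, capped at the group's shares */
        load = grp_load_avg * shares / grp_weight;
        if (load > shares)
                load = shares;

        return load;
}

int main(void)
{
        /*
         * One task of weight 512 running full-tilt in a group whose
         * shares are 1024: load_avg and load.weight both converge to
         * ~512, so the full 1024 shares are propagated instead of the
         * load being pushed down to 512.
         */
        printf("%ld\n", propagated_load(512, 512, 1024));      /* 1024 */
        printf("%ld\n", propagated_load(256, 512, 1024));      /* 512 */
        return 0;
}

The first call is the case discussed here: a task with a lower weight
(512) than the group's shares (1024) still propagates the full shares;
the second call shows a half-busy group propagating proportionally.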

>
> Thanks.
>
> --
> tejun
