Message-ID: <CAKfTPtC-iAewnR3QV1BJNWq-hJQphjs_eUwy=PxBrw-NuU3g_w@mail.gmail.com>
Date:   Fri, 13 Dec 2019 10:21:54 +0100
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Peng Wang <rocking@...ux.alibaba.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] schied/fair: Skip calculating @contrib without load

Hi Peng,

minor typo in the subject: s/schied/sched/

On Fri, 13 Dec 2019 at 04:46, Peng Wang <rocking@...ux.alibaba.com> wrote:
>
> Because of the:
>
>         if (!load)
>                 runnable = running = 0;
>
> clause in ___update_load_sum(), all the actual users of @contrib in
> accumulate_sum():
>
>         if (load)
>                 sa->load_sum += load * contrib;
>         if (runnable)
>                 sa->runnable_load_sum += runnable * contrib;
>         if (running)
>                 sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
>
> don't happen, and therefore we don't care what @contrib actually is and
> calculating it is pointless.
>
> If we count the times when @load equals zero and when it does not, as below:
>
>         if (load) {
>                 load_is_not_zero_count++;
>                 contrib = __accumulate_pelt_segments(periods,
>                                 1024 - sa->period_contrib, delta);
>         } else
>                 load_is_zero_count++;
>
> As we can see, load_is_zero_count is much bigger than
> load_is_not_zero_count, and the gap is gradually widening:
>
>         load_is_zero_count:            6016044 times
>         load_is_not_zero_count:         244316 times
>         19:50:43 up 1 min,  1 user,  load average: 0.09, 0.06, 0.02
>
>         load_is_zero_count:            7956168 times
>         load_is_not_zero_count:         261472 times
>         19:51:42 up 2 min,  1 user,  load average: 0.03, 0.05, 0.01
>
>         load_is_zero_count:           10199896 times
>         load_is_not_zero_count:         278364 times
>         19:52:51 up 3 min,  1 user,  load average: 0.06, 0.05, 0.01
>
>         load_is_zero_count:           14333700 times
>         load_is_not_zero_count:         318424 times
>         19:54:53 up 5 min,  1 user,  load average: 0.01, 0.03, 0.00

your system looks pretty idle, so I'm not sure we'd see a benefit from
your patch in such a situation

>
> Perhaps we can gain some performance advantage by saving these
> unnecessary calculations.

load == 0 when:
- the system is idle and we update blocked load, but we don't really
care about performance in this case, and we will stop triggering the
update once load_avg reaches zero
- an rt/dl/cfs rq or a sched_entity wakes up. In this case, skipping
the calculation of the contribution should speed up the wakeup path,
although I'm not sure by how much (rough call chain sketched below)

Nevertheless, this change makes sense

Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>

>
> Signed-off-by: Peng Wang <rocking@...ux.alibaba.com>
> ---
>  kernel/sched/pelt.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index a96db50..4392953 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
>                  * Step 2
>                  */
>                 delta %= 1024;
> -               contrib = __accumulate_pelt_segments(periods,
> -                               1024 - sa->period_contrib, delta);
> +               if (load)
> +                       contrib = __accumulate_pelt_segments(periods,
> +                                       1024 - sa->period_contrib, delta);
>         }
>         sa->period_contrib = delta;
>
> --
> 1.8.3.1
>
