Date:   Fri, 26 Apr 2019 09:08:29 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Thara Gopinath <thara.gopinath@...aro.org>,
        Ingo Molnar <mingo@...hat.com>,
        Zhang Rui <rui.zhang@...el.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Amit Kachhap <amit.kachhap@...il.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Javi Merino <javi.merino@...nel.org>,
        Eduardo Valentin <edubezval@...il.com>,
        Daniel Lezcano <daniel.lezcano@...aro.org>,
        Nicolas Dechesne <nicolas.dechesne@...aro.org>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Quentin Perret <quentin.perret@....com>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>
Subject: Re: [PATCH V2 0/3] Introduce Thermal Pressure

On Thu, 25 Apr 2019 at 19:44, Ingo Molnar <mingo@...nel.org> wrote:
>
>
> * Ingo Molnar <mingo@...nel.org> wrote:
>
> >
> > * Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > > On Wed, Apr 17, 2019 at 08:29:32PM +0200, Ingo Molnar wrote:
> > > > Assuming PeterZ & Rafael & Quentin don't hate the whole thermal load
> > > > tracking approach.
> > >
> > > I seem to remember competing proposals, and have forgotten everything
> > > about them; the cover letter also didn't have references to them or
> > > mention them in any way.
> > >
> > > As to the averaging and period, I personally prefer a PELT signal with
> > > the windows lined up; if that really is too short a window, then a
> > > PELT-like signal with a natural multiple of the PELT period would make
> > > sense, such that the windows still line up nicely.
> > >
> > > Mixing different averaging methods and non-aligned windows just makes me
> > > uncomfortable.
> >
> > Yeah, so the problem with PELT is that while it nicely approximates
> > variable-period decay calculations with plain additions, shifts and
> > table lookups (i.e. it accelerates pow()), AFAICS the most important
> > parameter, the speed of decay (the damping factor), is fixed at 32:
> >
> >   Documentation/scheduler/sched-pelt.c
> >
> >   #define HALFLIFE 32
> >
> > Right?
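
Right, and as a toy illustration of that table math (a standalone
sketch of the idea, not the actual sched-pelt.c code):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define HALFLIFE 32			/* periods per half-life */

static uint32_t yN_inv[HALFLIFE];	/* y^n in 32-bit fixed point */

/* pow() becomes one shift plus one table lookup */
static uint64_t decay_load(uint64_t val, unsigned int n)
{
	val >>= n / HALFLIFE;		/* whole half-lives */
	return (val * yN_inv[n % HALFLIFE]) >> 32;
}

int main(void)
{
	double y = pow(0.5, 1.0 / HALFLIFE);	/* y^32 == 0.5 */
	unsigned int n;

	for (n = 0; n < HALFLIFE; n++)
		yN_inv[n] = (uint32_t)(0xffffffffULL * pow(y, n));

	for (n = 0; n <= 96; n += 32)
		printf("decay_load(1024, %2u) = %4llu\n", n,
		       (unsigned long long)decay_load(1024, n));
	return 0;
}

The whole decay curve hangs off that single HALFLIFE=32 table.
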
> >
> > Thara's numbers suggest that there's high sensitivity to the speed of
> > decay. By using PELT we'd be tied to whatever decay speed is built
> > into PELT.
> >
> > Now we could make that parametric of course, but that would both
> > complicate the PELT lookup code (one more dimension) and negatively
> > affect code generation in a number of places.
>
> I missed the other solution, which is what you suggested: by
> increasing/reducing the PELT window size we can effectively shift the
> decay speed and use just a single lookup table.
>
> I.e. instead of the fixed period size of 1024 usecs in accumulate_sum(),
> use decay_load() directly but with a different (longer) window size than
> 1024 usecs to calculate 'periods', and make it a multiple of 1024.
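
If I read that right, the decay speed scales linearly with the window
size; a toy model of that arithmetic (powers of two only, for
simplicity; not a patch):

#include <stdint.h>
#include <stdio.h>

#define HALFLIFE 32	/* periods per half-life, unchanged */

int main(void)
{
	unsigned int shift;

	/* window = (1024 << shift) usecs, same single decay table */
	for (shift = 0; shift <= 3; shift++) {
		uint64_t window_us = 1024ULL << shift;
		uint64_t half_life_us = HALFLIFE * window_us;

		printf("window %4llu us -> half-life ~%3llu ms\n",
		       (unsigned long long)window_us,
		       (unsigned long long)(half_life_us / 1000));
	}
	return 0;
}

which prints half-lives of ~32, ~65, ~131 and ~262 ms, i.e. it does
reach the couple-of-hundred-ms range mentioned below.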

Can't we also scale the 'now' parameter of ___update_load_sum()?
If we right-shift it before calling ___update_load_sum(), it should be
the same as using a half-life of 64, 128, 256 ms and so on.
The main drawback would be a loss of precision, but we are in the range
of 2, 4, 8 us compared to the 1 ms window.

This is quite similar to how we scale the utilization with frequency and uarch.
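
Roughly like this toy model (standalone sketch, made-up numbers,
nothing tested against the real ___update_load_sum()):

#include <stdint.h>
#include <stdio.h>

/* periods elapsed, as the PELT accumulation sees them */
static uint64_t periods(uint64_t now_us, uint64_t last_us)
{
	return (now_us - last_us) / 1024;	/* 1024 us PELT window */
}

int main(void)
{
	uint64_t last = 0, now = 262144;	/* 256 ms of wall clock */
	unsigned int shift;

	/* right-shifting the clock scales the effective half-life up */
	for (shift = 0; shift <= 3; shift++)
		printf("shift %u: %3llu periods, half-life ~%3u ms\n",
		       shift,
		       (unsigned long long)periods(now >> shift,
						   last >> shift),
		       32U << shift);
	return 0;
}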

>
> This might just work out right: with a half-life of 32 periods the
> fastest decay speed should be ~20 msecs (?) - and Thara's numbers so
> far suggest that the sweet spot averaging is significantly longer, at a
> couple of hundred millisecs.
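
For reference on the numbers: with HALFLIFE=32 and 1024 us windows the
half-life works out to 32 * 1024 us ~= 33 ms, so scaling the window (or
shifting 'now') by a factor N gives roughly N * 33 ms, and the
couple-of-hundred-ms sweet spot would need N somewhere around 6-8.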
>
> Thanks,
>
>         Ingo
