Message-ID: <20180803155547.sxlhxpmhwcoappit@queper01-lin>
Date:   Fri, 3 Aug 2018 16:55:49 +0100
From:   Quentin Perret <quentin.perret@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        "open list:THERMAL" <linux-pm@...r.kernel.org>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        Ingo Molnar <mingo@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Chris Redpath <chris.redpath@....com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Valentin Schneider <valentin.schneider@....com>,
        Thara Gopinath <thara.gopinath@...aro.org>,
        viresh kumar <viresh.kumar@...aro.org>,
        Todd Kjos <tkjos@...gle.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        "Cc: Steve Muckle" <smuckle@...gle.com>, adharmap@...cinc.com,
        "Kannan, Saravana" <skannan@...cinc.com>, pkondeti@...eaurora.org,
        Juri Lelli <juri.lelli@...hat.com>,
        Eduardo Valentin <edubezval@...il.com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        currojerez@...eup.net, Javi Merino <javi.merino@...nel.org>
Subject: Re: [PATCH v5 09/14] sched: Add over-utilization/tipping point
 indicator

On Friday 03 Aug 2018 at 15:49:24 (+0200), Vincent Guittot wrote:
> On Fri, 3 Aug 2018 at 10:18, Quentin Perret <quentin.perret@....com> wrote:
> >
> > On Friday 03 Aug 2018 at 09:48:47 (+0200), Vincent Guittot wrote:
> > > On Thu, 2 Aug 2018 at 18:59, Quentin Perret <quentin.perret@....com> wrote:
> > > I'm not really concerned about re-enabling load balance, but more
> > > that the packing of tasks on a few CPUs/clusters that EAS tries to
> > > achieve can be broken by every new task.
> >
> > Well, re-enabling load balance immediately would break the nice placement
> > that EAS found, because it would shuffle all tasks around and break the
> > packing strategy. Letting that sole new task go in find_idlest_cpu()
> 
> Sorry, I was not clear in my explanation. Re-enabling load balance
> would of course be a problem. I wanted to say that there is little
> chance that this will re-enable the load balance immediately and
> break EAS, so I'm not worried about that case. I'm only concerned
> about the new task being placed outside the EAS policy.
> 
> For example, if you run the simple script below on hikey960, which
> can't really be seen as a fork bomb IMHO, you will see threads
> scheduled on the big cores every 0.5 seconds whereas everything
> should be packed on the little cores.

I guess that also depends on what's running on the little cores, but I
see your point.
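
(The script itself is not quoted in this message. As a hypothetical
stand-in, the workload described amounts to forking one short-lived
child every 0.5 seconds; under that assumption, a minimal C version
could look like this:)

/*
 * Hypothetical stand-in for the script mentioned above (the actual
 * script is not included in this excerpt). Each iteration forks a
 * short-lived child; every child is a fresh forkee, so it is placed
 * by the fork-time slow path (find_idlest_cpu()) rather than by the
 * EAS wakeup path.
 */
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	for (;;) {
		pid_t pid = fork();

		if (pid < 0)
			exit(1);
		if (pid == 0)
			_exit(0);	/* trivial forkee, exits immediately */
		waitpid(pid, NULL, 0);
		usleep(500000);		/* one fork every 0.5 seconds */
	}
}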

I think we're discussing two different things right now:
    1. Should forkees go through find_energy_efficient_cpu()?
    2. Should forkees have an initial util_avg of 0 when EAS is enabled?

For 1, that would mean all forkees go on prev_cpu. I can see how that
can be more energy-efficient in some use-cases (the one you described,
for example), but it also has drawbacks. Placing the task on a big
CPU can have an energy cost, but it should also help the task build
its utilization faster, which is what EAS needs to make smart
decisions. Also, it isn't always true that going on the little CPUs
is more energy-efficient; only the Energy Model can tell. There is
just no perfect solution, so I'm still not fully decided on that one ...
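
(To make that routing concrete, here is a heavily simplified model of
the decision in question. This is not the kernel's actual code: the
two find_*() stubs only stand in for the real functions of the same
names, and the CPU numbers are made up.)

/*
 * Heavily simplified model of the fork-placement question. In the
 * series as posted, only wakeups take the EAS path; forks take the
 * slow path. Option 1 would send forkees through the EAS path too,
 * which, with no utilization history to feed the Energy Model,
 * effectively means staying on prev_cpu.
 */
#include <stdio.h>

#define SD_BALANCE_WAKE 0x1
#define SD_BALANCE_FORK 0x2

/* Stubs standing in for the real kernel functions of the same names. */
static int find_energy_efficient_cpu(int prev_cpu) { return prev_cpu; }
static int find_idlest_cpu(void) { return 4; /* may well be a big CPU */ }

static int select_cpu_model(int sd_flag, int prev_cpu, int eas_enabled)
{
	if (eas_enabled && (sd_flag & SD_BALANCE_WAKE))
		return find_energy_efficient_cpu(prev_cpu);

	/* Fork/exec: widest search, ignores the Energy Model. */
	return find_idlest_cpu();
}

int main(void)
{
	printf("wakeup -> CPU %d\n", select_cpu_model(SD_BALANCE_WAKE, 0, 1));
	printf("forkee -> CPU %d\n", select_cpu_model(SD_BALANCE_FORK, 0, 1));
	return 0;
}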

For 2, I'm a little more reluctant, because that has more
implications ... It could probably harm some fairly standard use
cases (a simple app launch, for example). Enqueueing something new on
a CPU would go unnoticed, which might be fine for a very small task,
but is probably a major issue if the task is actually big. I'd be
comfortable with 2 only if we also sped up the PELT half-life, TBH ...
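
(For context on what 2 would change, here is a simplified model of
how a forkee's util_avg is seeded at fork today, loosely based on
post_init_entity_util_avg() in kernel/sched/fair.c around this time;
details are reduced, so treat it as an approximation.)

/*
 * Simplified model of fork-time utilization seeding, loosely based on
 * post_init_entity_util_avg() (kernel/sched/fair.c, ~v4.18): a forkee
 * starts with a share of its runqueue's utilization, capped at half
 * of the spare capacity, i.e. not at zero. Question 2 asks whether
 * EAS should start it at zero instead.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024

struct rq_avg { long util_avg; long load_avg; };

static long seed_forkee_util(const struct rq_avg *rq, long se_weight)
{
	long cap = (SCHED_CAPACITY_SCALE - rq->util_avg) / 2;
	long util;

	if (cap <= 0)
		return 0;
	if (rq->util_avg) {
		util = rq->util_avg * se_weight / (rq->load_avg + 1);
		if (util > cap)
			util = cap;
	} else {
		util = cap;	/* idle rq: half the spare capacity */
	}
	return util;
}

int main(void)
{
	struct rq_avg idle = { 0, 0 };

	/* Prints 512: a forkee on an idle CPU does not start at zero. */
	printf("forkee util_avg on an idle CPU: %ld\n",
	       seed_forkee_util(&idle, 1024));
	return 0;
}

With that non-zero seed, a new task is visible to utilization-based
decisions from the start; zeroing it would leave the task invisible
until PELT ramps its signal up, which is why a faster PELT half-life
would make 2 more palatable.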

Is there a 3 that I missed?

Thanks,
Quentin
