Date:   Tue, 20 Feb 2018 16:22:22 +0000
From:   Morten Rasmussen <morten.rasmussen@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, valentin.schneider@....com,
        dietmar.eggemann@....com, vincent.guittot@...aro.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/7] sched/fair: Avoid unnecessary balancing of
 asymmetric capacity groups

On Mon, Feb 19, 2018 at 03:53:35PM +0100, Peter Zijlstra wrote:
> On Mon, Feb 19, 2018 at 03:50:12PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 15, 2018 at 04:20:51PM +0000, Morten Rasmussen wrote:
> > > On systems with asymmetric cpu capacities, a skewed load distribution
> > > might yield better throughput than balancing load per group capacity.
> > > For example, preferring high capacity cpus for compute intensive tasks
> > > and leaving low capacity cpus idle, rather than balancing the number
> > > of idle cpus across different cpu types. Instead, let load-balance
> > > back off if the busiest group isn't really overloaded.
> > 
> > I'm sorry. I just can't seem to make sense of that today. What?
> 
> Aah, you're saying that if we have 4+4 big.LITTLE and 4 runnable tasks,
> we would like them running on our big cluster and leave the small
> cluster entirely idle, instead of running 2+2.

Yes. Ideally, when we are busy, we want to fill up the big cpus first,
one task on each, and if we have more tasks we start using little cpus
before putting additional tasks on any of the bigs. The load per group
capacity metric works against that, up until roughly the point where we
have enough tasks for all the cpus.
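Something along these lines, as a standalone sketch (the struct and
names here are made up for illustration, and this is not the actual
patch):

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the back-off condition: if the busiest group in an
 * asymmetric-capacity domain has no more runnable tasks than cpus,
 * the skewed distribution is intentional and load-balance should
 * leave it alone rather than chase equal load per capacity.
 */
struct group_stats {
	unsigned int nr_cpus;		/* cpus in the sched group */
	unsigned int nr_running;	/* runnable tasks in the group */
};

static bool lb_should_back_off(const struct group_stats *busiest,
			       bool asym_capacity)
{
	if (!asym_capacity)
		return false;
	/* One task per cpu (or fewer): not overloaded, don't balance. */
	return busiest->nr_running <= busiest->nr_cpus;
}

int main(void)
{
	/* Your 4+4 example: 4 runnable tasks, all on the big cluster. */
	struct group_stats bigs = { .nr_cpus = 4, .nr_running = 4 };

	printf("back off: %d\n", lb_should_back_off(&bigs, true)); /* 1 */
	return 0;
}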

> So what about this DynamIQ nonsense? Or is that so unstructured we
> can't really do anything sensible with it?

With DynamIQ we don't have any grouping of the cpu types provided by
the architecture, so we are going to end up balancing between
individual cpus. I hope these tweaks will work for DynamIQ too:
always-running tasks on little cpus should be flagged as misfits and
be picked up by big cpus. That still needs to be verified, though.
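
Roughly, the misfit check I have in mind looks like this (again a
standalone sketch; the ~20% headroom margin and all the names are
assumptions for the example, not actual code):

#include <stdbool.h>
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024	/* capacity of the biggest cpu */

/*
 * Sketch of misfit detection: a task is a misfit on a cpu when its
 * tracked utilization leaves no headroom against the cpu's capacity,
 * e.g. an always-running task on a little cpu.  Fixed-point with
 * scale 1024; the margin 1280/1024 = 1.25 demands ~20% headroom.
 */
static bool task_fits_cpu(unsigned long task_util, unsigned long cpu_cap)
{
	/* require util * 1.25 <= capacity */
	return task_util * 1280 <= cpu_cap * SCHED_CAPACITY_SCALE;
}

int main(void)
{
	/* always-running task (util ~1024) on a little cpu (cap 446) */
	printf("misfit: %d\n", !task_fits_cpu(1024, 446)); /* 1 */
	return 0;
}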
