Date:   Wed, 4 Apr 2018 15:43:17 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Valentin Schneider <valentin.schneider@....com>
Cc:     Morten Rasmussen <morten.rasmussen@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will.deacon@....com>,
        LAK <linux-arm-kernel@...ts.infradead.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Chris Redpath <chris.redpath@....com>
Subject: Re: [PATCH] sched: support dynamiQ cluster

On 4 April 2018 at 12:44, Valentin Schneider <valentin.schneider@....com> wrote:
> Hi,
>
> On 03/04/18 13:17, Vincent Guittot wrote:
>> Hi Valentin,
>>
> [...]
>>>
>>> I believe ASYM_PACKING behaves better here because the workload is only
>>> sysbench threads. As stated above, since task utilization is disregarded, I
>>
>> It behaves better because it doesn't wait for the task's utilization
>> to reach a level before assuming the task needs high compute capacity.
>> The utilization gives an idea of the running time of the task, not the
>> performance level that is needed.
>>
>
> That's my point actually. ASYM_PACKING disregards utilization and moves those
> threads to the big cores ASAP, which is good here because it's just sysbench
> threads.
>
> What I meant was that if the task composition changes, IOW we mix "small"
> tasks (e.g. periodic stuff) and "big" tasks (performance-sensitive stuff like
> sysbench threads), we shouldn't assume all of those need to run on a big
> CPU. The thing is, ASYM_PACKING can't tell the difference between those, so

That's the first point where I tend to disagree: why should big cores
only be for long-running tasks, and why couldn't periodic stuff need to
run on big cores to get max compute capacity?
You assume that only long-running tasks need high compute capacity.
This patch aims to always provide max compute capacity to the system,
not only to long-running tasks.

> it'll all come down to which task spawned first.
>
> Furthermore, ASYM_PACKING will forcefully move tasks via active balance
> regardless of the imbalance as long as a big CPU is idle.
>
> So we could have a scenario where loads of "small" tasks spawn, and they all
> get moved to a big CPU until they're all full (because they're periodic tasks
> so the big CPUs will eventually be idle and will pull another task as long as
> they get some idle time).
>
> Then, before the load tracking signals of those tasks ramp up high enough
> that the load balancer would try to move those to LITTLE CPUs, some "big"
> tasks spawn. They get scheduled on LITTLE CPUs, and now the system will look
> balanced so nothing will be done.

As explained above, as long as the big CPUs are always used, I don't
think it's a problem. What is a problem is a task staying on a LITTLE
CPU while a big CPU is idle, because the big CPU could provide more throughput.
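For what it's worth, the spawn-order hazard Valentin describes can be sketched
in a few lines. This is only a toy model, not the actual kernel logic: the CPU
names, capacities, and the one-task-per-CPU placement rule are illustrative
assumptions. It just shows that a policy which fills the highest-capacity idle
CPU first, without looking at utilization, hands the big cores to whichever
tasks happened to spawn first:

```python
# Toy model (not kernel code) of an ASYM_PACKING-style placement: each task
# goes to the idle CPU with the highest capacity, utilization is never
# consulted. Capacities follow the usual convention (big = 1024).

def asym_packing_place(tasks, cpus):
    """Assign each task, in spawn order, to the idle CPU with the highest
    capacity. One task per CPU, for simplicity."""
    idle = sorted(cpus, key=lambda c: -cpus[c])  # big CPUs come first
    placement = {}
    for task in tasks:
        placement[task] = idle.pop(0)  # task utilization plays no role
    return placement

cpus = {"big0": 1024, "big1": 1024, "little0": 512, "little1": 512}
# Small periodic tasks spawn first, heavy sysbench-like tasks later.
spawn_order = ["periodic0", "periodic1", "sysbench0", "sysbench1"]
placement = asym_packing_place(spawn_order, cpus)
# The periodic tasks end up on the big CPUs and the heavy tasks on the
# LITTLEs; with every CPU busy, the system looks balanced, so nothing
# moves them afterwards.
```

Under Vincent's view this outcome is acceptable as long as no big CPU sits
idle; under Valentin's, the sysbench tasks are stuck at half capacity because
the policy couldn't tell them apart from the periodic ones.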

>
>
> I acknowledge this all sounds convoluted but I hope it highlights what I
> think could go wrong with ASYM_PACKING on asymmetric systems.
>
> Regards,
> Valentin
