Date:   Fri, 17 Apr 2020 16:55:33 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Juri Lelli <juri.lelli@...hat.com>
Cc:     Valentin Schneider <valentin.schneider@....com>,
        luca abeni <luca.abeni@...tannapisa.it>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Wei Wang <wvw@...gle.com>, Quentin Perret <qperret@...gle.com>,
        Alessio Balsini <balsini@...gle.com>,
        Pavan Kondeti <pkondeti@...eaurora.org>,
        Patrick Bellasi <patrick.bellasi@...bug.net>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Qais Yousef <qais.yousef@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4] sched/deadline: Improve admission control for
 asymmetric CPU capacities

On 17.04.20 14:19, Juri Lelli wrote:
> On 09/04/20 19:29, Dietmar Eggemann wrote:

[...]

>> Maybe we can do a hybrid. We have rd->span and rd->sum_cpu_capacity and
>> with the help of an extra per-cpu cpumask we could just
> 
> Hummm, I like the idea, but
> 
>> DEFINE_PER_CPU(cpumask_var_t, dl_bw_mask);
>>
>> dl_bw_cpus(int i) {
> 
> This works if calls are always local to the rd we are interested in
> (argument 'i' isn't used). Are we always doing that?

I thought so. The existing dl_bw_cpus(int i) implementation already
assumes this by using:

    struct root_domain *rd = cpu_rq(i)->rd;

    ...

    for_each_cpu_and(i, rd->span, cpu_active_mask)

Or did you refer to something else here?
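
For reference, the whole helper currently looks roughly like this
(quoting from memory, so not necessarily verbatim; the comment is
mine):

    static inline int dl_bw_cpus(int i)
    {
            struct root_domain *rd = cpu_rq(i)->rd;
            int cpus = 0;

            RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
                             "sched RCU must be held");

            /*
             * 'i' is only used to look up the local rd; afterwards it
             * is reused as the iterator over rd->span & cpu_active_mask.
             */
            for_each_cpu_and(i, rd->span, cpu_active_mask)
                    cpus++;

            return cpus;
    }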

And the patch would not introduce new places in which we call
dl_bw_cpus(); it would just replace some of those calls with a
dl_bw_capacity() call.
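
Just to make the intent concrete, at a call site like the admission
check the change is essentially the following (hand-written sketch,
not the actual diff, so details may differ):

    cap = dl_bw_capacity(cpu);
    overflow = __dl_overflow(dl_b, cap, 0, new_bw);

with __dl_overflow() then taking a capacity sum instead of a CPU
count, i.e. something like:

    static inline bool __dl_overflow(struct dl_bw *dl_b, unsigned long cap,
                                     u64 old_bw, u64 new_bw)
    {
            return dl_b->bw != -1 &&
                   cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
    }

IOW, the admission test goes from 'dl_b->bw * nr_cpus' to
'dl_b->bw * capacity_sum >> SCHED_CAPACITY_SHIFT'.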

>>     struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
>>     ...
>>     cpumask_and(cpus, rd->span, cpu_active_mask);
>>
>>     return cpumask_weight(cpus);
>> }
>>
>> and
>>
>> dl_bw_capacity(int i) {
>>
>>     struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
>>     ...
>>     cpumask_and(cpus, rd->span, cpu_active_mask);
>>     if (cpumask_equal(cpus, rd->span))
>>         return rd->sum_cpu_capacity;
> 
> What if capacities change between invocations (with the same span)?
> Can that happen?

The CPU capacity should only change during initial bring-up. On
asymmetric CPU capacity systems we have to re-create the Sched Domain
(SD) topology after the CPUfreq driver becomes available.

After the initial build and this first rebuild of the SD topology, the
CPU capacity should be stable.

Everything that might follow afterwards (starting EAS, exclusive
cpusets or CPU hotplug) will not change the CPU capacity.

Obviously, if you defer loading the CPUfreq driver until after DL
scheduling has started, you can break things. But that is not
considered a valid environment here.
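
So with stable capacities the cpumask_equal() fast path above stays
valid between invocations. For the !cpumask_equal() case the idea
would be to just sum up the original capacities of the active CPUs in
the span, i.e. roughly (sketch only; the accessor used here,
capacity_orig_of(), and whether we keep rd->sum_cpu_capacity are of
course still up for discussion):

    static inline unsigned long dl_bw_capacity(int i)
    {
            struct root_domain *rd = cpu_rq(i)->rd;
            struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
            unsigned long cap = 0;

            cpumask_and(cpus, rd->span, cpu_active_mask);

            /* Fast path: all CPUs of the rd are active. */
            if (cpumask_equal(cpus, rd->span))
                    return rd->sum_cpu_capacity;

            /* Otherwise sum the original capacities of the active CPUs. */
            for_each_cpu(i, cpus)
                    cap += capacity_orig_of(i);

            return cap;
    }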

[...]
