Message-ID: <jhjwo7bkn2h.mognet@arm.com>
Date:   Mon, 23 Mar 2020 17:17:42 +0000
From:   Valentin Schneider <valentin.schneider@....com>
To:     Dietmar Eggemann <dietmar.eggemann@....com>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        peterz@...radead.org, vincent.guittot@...aro.org
Subject: Re: [PATCH v2 3/9] sched: Remove checks against SD_LOAD_BALANCE


On Mon, Mar 23 2020, Dietmar Eggemann wrote:

> On 19.03.20 13:05, Valentin Schneider wrote:
>>
>> On Thu, Mar 19 2020, Dietmar Eggemann wrote:
>>> On 11.03.20 19:15, Valentin Schneider wrote:
>
> [...]
>
>> Your comments make me realize that the changelog isn't great; what about the
>> following?
>>
>> ---
>>
>> The SD_LOAD_BALANCE flag is set unconditionally for all domains in
>> sd_init(). By making the sched_domain->flags sysctl interface read-only, we
>> have removed the last piece of code that could clear that flag - as such,
>> it will now always be present. Rather than keep carrying it along, we
>> can work towards getting rid of it entirely.
>>
>> cpusets don't need it because they can make CPUs be attached to the NULL
>> domain (e.g. cpuset with sched_load_balance=0), or to a partitionned
>
> s/partitionned/partitioned
>
>> root_domain, i.e. a sched_domain hierarchy that doesn't span the entire
>> system (e.g. root cpuset with sched_load_balance=0 and sibling cpusets with
>> sched_load_balance=1).
>>
>> isolcpus applies the same "trick": isolated CPUs are explicitly taken out of
>> the sched_domain rebuild (using housekeeping_cpumask()), so they get the
>> NULL domain treatment as well.
>>
>> Remove the checks against SD_LOAD_BALANCE.
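
Sanity-checking the "read-only" bit above, using the sched_domain sysctl
paths from your tree output further down (commands are a sketch; I haven't
run them here):

  cat /proc/sys/kernel/sched_domain/cpu0/domain0/flags
  # dumps the domain's flags; per the above, SD_LOAD_BALANCE will always be set
  echo 0 > /proc/sys/kernel/sched_domain/cpu0/domain0/flags
  # should now fail, since the interface is read-only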
>
> Sounds better to me.
>
> Essentially, I was referring to examples like:
>
> Hikey960 - 2x4
>
> (A) exclusive cpusets:
>
> root@...0:/sys/fs/cgroup/cpuset# mkdir cs1
> root@...0:/sys/fs/cgroup/cpuset# echo 1 > cs1/cpuset.cpu_exclusive
> root@...0:/sys/fs/cgroup/cpuset# echo 0 > cs1/cpuset.mems
> root@...0:/sys/fs/cgroup/cpuset# echo 0-2 > cs1/cpuset.cpus
> root@...0:/sys/fs/cgroup/cpuset# mkdir cs2
> root@...0:/sys/fs/cgroup/cpuset# echo 1 > cs2/cpuset.cpu_exclusive
> root@...0:/sys/fs/cgroup/cpuset# echo 0 > cs2/cpuset.mems
> root@...0:/sys/fs/cgroup/cpuset# echo 3-5 > cs2/cpuset.cpus
> root@...0:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance
>

AFAICT you don't even have to bother with cpuset.cpu_exclusive if you
only care about the end-result wrt sched_domains.
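
Something like the below should end up with the same two sched_domain
partitions (a sketch reusing your cs1/cs2 layout; untested):

  cd /sys/fs/cgroup/cpuset
  mkdir cs1 cs2
  echo 0 > cs1/cpuset.mems
  echo 0-2 > cs1/cpuset.cpus
  echo 0 > cs2/cpuset.mems
  echo 3-5 > cs2/cpuset.cpus
  # note: no cpuset.cpu_exclusive writes
  echo 0 > cpuset.sched_load_balance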

> root@...0:/proc/sys/kernel# tree -d sched_domain
>
> ├── cpu0
> │   └── domain0
> ├── cpu1
> │   └── domain0
> ├── cpu2
> │   └── domain0
> ├── cpu3
> │   └── domain0
> ├── cpu4
> │   ├── domain0
> │   └── domain1
> ├── cpu5
> │   ├── domain0
> │   └── domain1
> ├── cpu6
> └── cpu7
>
> (B) non-exclusive cpuset:
>
> root@...0:/sys/fs/cgroup/cpuset# echo 0 > cpuset.sched_load_balance
>
> [ 8661.240385] CPU1 attaching NULL sched-domain.
> [ 8661.244802] CPU2 attaching NULL sched-domain.
> [ 8661.249255] CPU3 attaching NULL sched-domain.
> [ 8661.253623] CPU4 attaching NULL sched-domain.
> [ 8661.257989] CPU5 attaching NULL sched-domain.
> [ 8661.262363] CPU6 attaching NULL sched-domain.
> [ 8661.266730] CPU7 attaching NULL sched-domain.
>
> root@...0:/sys/fs/cgroup/cpuset# mkdir cs1
> root@...0:/sys/fs/cgroup/cpuset# echo 0-5 > cs1/cpuset.cpus
>
> root@...0:/proc/sys/kernel# tree -d sched_domain
>
> ├── cpu0
> │   ├── domain0
> │   └── domain1
> ├── cpu1
> │   ├── domain0
> │   └── domain1
> ├── cpu2
> │   ├── domain0
> │   └── domain1
> ├── cpu3
> │   ├── domain0
> │   └── domain1
> ├── cpu4
> │   ├── domain0
> │   └── domain1
> ├── cpu5
> │   ├── domain0
> │   └── domain1
> ├── cpu6
> └── cpu7

I think my updated changelog covers those cases, right?
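
FWIW, the isolcpus variant from the changelog can be eyeballed the same
way (CPU numbers below are just an example, and the expected results are
from memory rather than an actual run):

  # boot with isolcpus=6,7 on the kernel command line, then:
  cat /sys/devices/system/cpu/isolated
  # should report 6-7
  ls /proc/sys/kernel/sched_domain/cpu6
  # no domain* entries should show up - i.e. the NULL domain treatment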
