Message-ID: <306502b9-522a-43d9-1209-15675009bf1b@arm.com>
Date:   Thu, 6 Aug 2020 13:25:49 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Valentin Schneider <valentin.schneider@....com>,
        linux-kernel@...r.kernel.org
Cc:     mingo@...nel.org, peterz@...radead.org, vincent.guittot@...aro.org,
        morten.rasmussen@....com, Quentin Perret <qperret@...gle.com>
Subject: Re: [PATCH v4 00/10] sched: Instrument sched domain flags

On 31/07/2020 13:54, Valentin Schneider wrote:
> Hi,
> 
> I've repeatedly stared at an SD flag and asked myself "how should that be
> set up in the domain hierarchy anyway?". I figured that if we formalize our
> flags zoology a bit, we could also do some runtime assertions on them -
> this is what this series is all about.
> 
> Patches
> =======
> 
> The idea is to associate the flags with metaflags that describe how they
> should be set in a sched domain hierarchy ("if this SD has it, all its {parents,
> children} have it") or how they behave wrt degeneration - details are in the
> comments and commit logs.
> 
> The good thing is that the debugging bits go away when CONFIG_SCHED_DEBUG isn't
> set. The bad thing is that this replaces SD_* flags definitions with some
> unsavoury macros. This is mainly because I wanted to avoid having to duplicate
> work between declaring the flags and declaring their metaflags.
> 
> o Patches 1-3 are topology cleanups / fixes
> o Patches 4-6 instrument SD flags and add assertions
> o Patches 7-10 leverage the instrumentation to factorize domain degeneration
> 
> Revisions
> =========
> 
> v3 -> v4
> --------
> 
> o Reordered the series to have fixes / cleanups first
> 
> o Added SD_ASYM_CPUCAPACITY propagation (Quentin)
> o Made ARM revert to the default sched topology (Dietmar)
> o Removed SD_SERIALIZE degeneration special case (Peter)
> 
> o Made SD_NUMA and SD_SERIALIZE have SDF_NEEDS_GROUPS
> 
>   As discussed on v3, I thought this wasn't required, but thinking some more
>   about it there can be cases where that changes the current behaviour. For
>   instance, in the following wacky triangle:
> 
>       0\ 30
>       | \
>   20  |  2
>       | /
>       1/ 30
> 
>   there are two unique distances thus two NUMA topology levels, however the
>   first one for node 2 would have the same span as its child domain and thus
>   should be degenerated. If we don't give SD_NUMA and SD_SERIALIZE
>   SDF_NEEDS_GROUPS, this domain wouldn't be degenerated since its child
>   *doesn't* have either SD_NUMA or SD_SERIALIZE (it's the first NUMA domain),
>   and we'd have this weird NUMA domain lingering with a single group.

LGTM.

Tested on Arm & Arm64 dual-cluster big.LITTLE (so only
default_topology[]) with CONFIG_SCHED_MC=y for the following cases:

(1) normal bring-up
(2) CPU hp all but one CPU of one cluster
(3) CPU hp entire cluster

Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
Tested-by: Dietmar Eggemann <dietmar.eggemann@....com>
