Message-ID: <98ad8837-b9b8-ff50-5a91-8d5951ee757c@arm.com>
Date:   Wed, 26 May 2021 11:52:25 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Beata Michalska <beata.michalska@....com>,
        Valentin Schneider <valentin.schneider@....com>
Cc:     linux-kernel@...r.kernel.org, peterz@...radead.org,
        mingo@...hat.com, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, corbet@....net, rdunlap@...radead.org,
        linux-doc@...r.kernel.org
Subject: Re: [PATCH v5 2/3] sched/topology: Rework CPU capacity asymmetry
 detection

On 25/05/2021 12:29, Beata Michalska wrote:
> On Tue, May 25, 2021 at 10:53:07AM +0100, Valentin Schneider wrote:
>> On 24/05/21 23:55, Beata Michalska wrote:
>>> On Mon, May 24, 2021 at 07:01:04PM +0100, Valentin Schneider wrote:
>>>> On 24/05/21 11:16, Beata Michalska wrote:

[...]

>>>>> +static inline int
>>>>> +asym_cpu_capacity_classify(struct sched_domain *sd,
>>>>> +			   const struct cpumask *cpu_map)
>>>>> +{
>>>>> +	int sd_asym_flags = SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
>>>>> +	struct asym_cap_data *entry;
>>>>> +	int asym_cap_count = 0;
>>>>> +
>>>>> +	if (list_is_singular(&asym_cap_list))
>>>>> +		goto leave;
>>>>> +
>>>>> +	list_for_each_entry(entry, &asym_cap_list, link) {
>>>>> +		if (cpumask_intersects(sched_domain_span(sd), entry->cpu_mask)) {
>>>>> +			++asym_cap_count;
>>>>> +		} else {
>>>>> +			/*
>>>>> +			 * CPUs with given capacity might be offline
>>>>> +			 * so make sure this is not the case
>>>>> +			 */
>>>>> +			if (cpumask_intersects(entry->cpu_mask, cpu_map)) {
>>>>> +				sd_asym_flags &= ~SD_ASYM_CPUCAPACITY_FULL;
>>>>> +				if (asym_cap_count > 1)
>>>>> +					break;
>>>>> +			}
>>>>
>>>> Readability nit: That could be made into an else if ().
>>> It could but then this way the -comment- gets more exposed.
>>> But that might be my personal perception so I can change that.
>>
>> As always those are quite subjective! Methink something like this would
>> still draw attention to the offline case:
>>
>>                /*
>>                 * Count how many unique capacities this domain covers. If a
>>                 * capacity isn't covered, we need to check if any CPU with
>>                 * that capacity is actually online, otherwise it can be
>>                 * ignored.
>>                 */
>>                 if (cpumask_intersects(sched_domain_span(sd), entry->cpu_mask)) {
>>                         ++asym_cap_count;
>>                 } else if (cpumask_intersects(entry->cpu_mask, cpu_map)) {
>>                         sd_asym_flags &= ~SD_ASYM_CPUCAPACITY_FULL;
>>                         if (asym_cap_count > 1)
>>                                 break;
>>                 }
> Noted.
> Will wait for some more comments before sending out 'polished' version.

For me asym_cpu_capacity_classify() is pretty hard to digest ;-) But I
wasn't able to break it. It also behaves correctly on a (non-existent)
SMT layer (with the sd span equal to a single CPU).

Something like this (separating the asym_cap_list iteration from the
flags construction) would be easier for me. But like already said here,
it's subjective.
I left the two optimizations (list_is_singular(), break on
asym_cap_count > 1) out for now; asym_cap_list shouldn't have more than
4 entries (;-)).

static inline int
asym_cpu_capacity_classify(struct sched_domain *sd, 
                           const struct cpumask *cpu_map)
{
        int sd_span_match = 0, cpu_map_match = 0, flags = 0; 
        struct asym_cap_data *entry;

        list_for_each_entry(entry, &asym_cap_list, link) {
                if (cpumask_intersects(sched_domain_span(sd), entry->cpu_mask))
                        ++sd_span_match;
                else if (cpumask_intersects(cpu_map, entry->cpu_mask))
                        ++cpu_map_match;
        }

        WARN_ON_ONCE(!sd_span_match);

        if (sd_span_match > 1) { 
                flags |= SD_ASYM_CPUCAPACITY;
                if (!cpu_map_match)
                        flags |= SD_ASYM_CPUCAPACITY_FULL;
        }

        return flags;
}

BTW, how would this mechanism behave on a system with SMT and asymmetric
CPU capacity? That's something EAS wouldn't allow, but I guess asym_cap_list
will still be constructed and the SD_ASYM_CPUCAPACITY_XXX flags will be set?
