Message-ID: <YK9ESqNEo+uacyMD@hirez.programming.kicks-ass.net>
Date: Thu, 27 May 2021 09:03:38 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Beata Michalska <beata.michalska@....com>,
Valentin Schneider <valentin.schneider@....com>,
linux-kernel@...r.kernel.org, mingo@...hat.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org, corbet@....net,
rdunlap@...radead.org, linux-doc@...r.kernel.org
Subject: Re: [PATCH v5 2/3] sched/topology: Rework CPU capacity asymmetry
detection
On Wed, May 26, 2021 at 11:52:25AM +0200, Dietmar Eggemann wrote:
> For me asym_cpu_capacity_classify() is pretty hard to digest ;-) But I
> wasn't able to break it. It also behaves correctly on a (non-existent SMT)
> layer (with an sd span equal to a single CPU).
This is the simplest form I could come up with this morning:
static inline int
asym_cpu_capacity_classify(struct sched_domain *sd,
			   const struct cpumask *cpu_map)
{
	struct asym_cap_data *entry;
	int i = 0, n = 0;

	list_for_each_entry(entry, &asym_cap_list, link) {
		if (cpumask_intersects(sched_domain_span(sd), entry->cpu_mask))
			i++;	/* capacity value present in this domain */
		else
			n++;	/* capacity value absent from this domain */
	}

	if (WARN_ON_ONCE(!i) || i == 1) /* no asymmetry */
		return 0;

	if (n) /* partial asymmetry */
		return SD_ASYM_CPUCAPACITY;

	/* full asymmetry */
	return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
}
The early termination and everything was cute; but this isn't
performance-critical code and clarity is paramount.
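For anyone following along outside the kernel tree, here is a minimal
userspace model of the classification logic above -- a sketch, not kernel
code. Plain bitmasks stand in for cpumask_t, an array stands in for
asym_cap_list (one entry per distinct CPU capacity value), and the names
classify() and cap_entry are illustrative only:

#include <stdio.h>

#define SD_ASYM_CPUCAPACITY		0x1
#define SD_ASYM_CPUCAPACITY_FULL	0x2

/* models one asym_cap_list entry: all CPUs sharing one capacity value */
struct cap_entry { unsigned long cpu_mask; };

static int classify(unsigned long sd_span,
		    const struct cap_entry *caps, int nr_caps)
{
	int i = 0, n = 0, c;

	for (c = 0; c < nr_caps; c++) {
		if (sd_span & caps[c].cpu_mask)
			i++;	/* capacity value present in this domain */
		else
			n++;	/* capacity value absent from this domain */
	}

	if (i <= 1)		/* 0 or 1 capacity values: no asymmetry */
		return 0;
	if (n)			/* some capacity values elsewhere: partial */
		return SD_ASYM_CPUCAPACITY;
	return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
}

int main(void)
{
	/* e.g. big.LITTLE: CPUs 0-3 little, CPUs 4-5 big */
	struct cap_entry caps2[] = { { 0x0f }, { 0x30 } };
	/* three capacity levels across CPUs 0-5 */
	struct cap_entry caps3[] = { { 0x03 }, { 0x0c }, { 0x30 } };

	printf("single CPU:  %d\n", classify(0x01, caps2, 2)); /* 0 */
	printf("littles:     %d\n", classify(0x0f, caps2, 2)); /* 0 */
	printf("all CPUs:    %d\n", classify(0x3f, caps2, 2)); /* 3 */
	printf("two of three:%d\n", classify(0x0f, caps3, 3)); /* 1 */
	return 0;
}

This also shows the single-CPU-span case from the quoted test above: the
span intersects exactly one capacity entry, so i == 1 and the function
reports no asymmetry.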