Message-ID: <20210525140400.GA9291@e120325.cambridge.arm.com>
Date: Tue, 25 May 2021 15:04:01 +0100
From: Beata Michalska <beata.michalska@....com>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org,
mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, valentin.schneider@....com,
corbet@....net, rdunlap@...radead.org, linux-doc@...r.kernel.org
Subject: Re: [PATCH v5 2/3] sched/topology: Rework CPU capacity asymmetry
detection
On Tue, May 25, 2021 at 01:59:30PM +0200, Dietmar Eggemann wrote:
> On 25/05/2021 11:30, Beata Michalska wrote:
> > On Tue, May 25, 2021 at 10:25:36AM +0200, Dietmar Eggemann wrote:
> >> On 24/05/2021 12:16, Beata Michalska wrote:
>
> [...]
>
> >>> @@ -1266,6 +1266,112 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
> >>> update_group_capacity(sd, cpu);
> >>> }
> >>>
> >>> +/**
> >>> + * Asymmetric CPU capacity bits
> >>> + */
> >>> +struct asym_cap_data {
> >>> + struct list_head link;
> >>> + unsigned long capacity;
> >>> + struct cpumask *cpu_mask;
> >>
> >> Not sure if this has been discussed already but shouldn't the flexible
> >> array members' approach known from struct sched_group, struct
> >> sched_domain or struct em_perf_domain be used here?
> >> IIRC the last time this has been discussed in this thread:
> >> https://lkml.kernel.org/r/20200910054203.525420-2-aubrey.li@intel.com
> >>
> > If I got the discussion you pointed to right, it was about using
> > cpumask_var_t, which is not the case here. I do not mind moving the code
> > to use the array, but I am not sure this changes much. Looking at the
> > code changes needed to support that (to_cpumask namely), it was introduced
> > for cases where cpumask_var_t was not appropriate, which again isn't the
> > case here.
>
> Yeah, it was more about using `flexible array members` or allocating the
> cpumask separately.
>
> Looks like you're using some kind of a mixed approach:
>
> (1) struct asym_cap_data {
> ...
> struct cpumask *cpu_mask;
>
> (2) entry = kzalloc(sizeof(*entry) + cpumask_size(), GFP_KERNEL);
>
> (3) entry->cpu_mask = (struct cpumask *)((char *)entry +
> sizeof(*entry));
>
> (4) cpumask_intersects(foo, entry->cpu_mask)
>
>
> E.g. struct em_perf_domain has
>
> (1) struct em_perf_domain {
> ...
> unsigned long cpus[];
>
> (2) like yours
>
> (3) is not needed.
>
> (4) cpumask_copy(em_span_cpus(pd), foo)
>
> with #define em_span_cpus(em) (to_cpumask((em)->cpus))
>
> IMHO, it's better to keep this approach aligned between the different
> data structures.
I would actually go the other way round, as it seems cleaner that way
and it does not need the conversion, but I don't mind playing along.
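
Just to make sure we are talking about the same thing, something along
these lines (a rough sketch only - the cpu_capacity_span() helper and the
asym_cpu_capacity_get_data() name below are made up for illustration,
not taken from the actual patch):

struct asym_cap_data {
	struct list_head link;
	unsigned long capacity;
	unsigned long cpus[];
};

#define cpu_capacity_span(asym_data) to_cpumask((asym_data)->cpus)

static struct asym_cap_data *asym_cpu_capacity_get_data(unsigned long capacity)
{
	struct asym_cap_data *entry;

	/* Single allocation: the cpumask storage lives right after the struct */
	entry = kzalloc(sizeof(*entry) + cpumask_size(), GFP_KERNEL);
	if (entry)
		entry->capacity = capacity;

	return entry;
}

which would make (3) go away and (4) become e.g.:

	cpumask_intersects(foo, cpu_capacity_span(entry))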
---
BR
B.