Message-ID: <535A8585.6030103@arm.com>
Date: Fri, 25 Apr 2014 16:55:49 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>
CC: "peterz@...radead.org" <peterz@...radead.org>,
"mingo@...nel.org" <mingo@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"fenghua.yu@...el.com" <fenghua.yu@...el.com>,
"schwidefsky@...ibm.com" <schwidefsky@...ibm.com>,
"cmetcalf@...era.com" <cmetcalf@...era.com>,
"benh@...nel.crashing.org" <benh@...nel.crashing.org>,
"linux@....linux.org.uk" <linux@....linux.org.uk>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>
Subject: Re: [PATCH v4 5/5] sched: ARM: create a dedicated scheduler topology
table
On 25/04/14 08:45, Vincent Guittot wrote:
[...]
>>
>> Back then I had
>> CPU0: cpu_corepower_mask=0-1
>> CPU2: cpu_corepower_mask=2
>> so for GMC level the cpumasks are inconsistent across CPUs and it worked.
>
> The example above is consistent because CPU2 mask and CPU0 mask are
> fully exclusive
OK, got it now. The cpu mask functions on an sd level can either return
different (but then fully exclusive) cpu masks, or they can all return
the same cpu mask (the DIE level in the example). Like you said, we
still have to respect the topology of the system.
This essentially excludes the DIE level (i.e. the sd level which spans
all CPUs) from playing this 'sd level folding via sd degenerate' game
on a system which sets FORCE_SD_OVERLAP to false or doesn't use the
SDTL_OVERLAP tl flag.
>
> so
> CPU0: cpu_corepower_mask=0-1
> CPU2: cpu_corepower_mask=2
> are consistent
>
> CPU0: cpu_corepower_mask=0-2
> CPU2: cpu_corepower_mask=0-2
> are also consistent
>
> but
>
> CPU0: cpu_corepower_mask=0-1
> CPU2: cpu_corepower_mask=0-2
> are not consistent
>
> and your example uses the last configuration
>
> To be more precise, the rule above applies on default SDT definition
> but the flag SD_OVERLAP enables such kind of overlap between group.
> Have you tried it ?
Setting FORCE_SD_OVERLAP indeed changes the scenario a bit (we're now
using build_overlap_sched_groups() instead of build_sched_groups()). It
looks better, but the groups for CPU0/1 on DIE level are wrong (to get
this far I still have to comment out the 'break if the sd span equals
cpu_map' check in build_sched_domains(), though).
dmesg snippet:
CPU0 attaching sched-domain:
domain 0: span 0-1 level MC
groups: 0 1
domain 1: span 0-4 level DIE
groups: 0-4 (cpu_power = 5120) 0-1 (cpu_power = 2048) <-- error !!!
CPU1 attaching sched-domain:
domain 0: span 0-1 level MC
groups: 1 0
domain 1: span 0-4 level DIE
groups: 0-1 (cpu_power = 2048) 0-4 (cpu_power = 5120) <-- error !!!
CPU2 attaching sched-domain:
domain 0: span 2-4 level GMC
groups: 2 3 4
domain 1: span 0-4 level GDIE
groups: 2-4 (cpu_power = 3072) 0-1 (cpu_power = 2048)
...
The feature I'm currently working on is to add sd energy information to
sd levels of the sd topology level table. I essentially added another
column of sd energy related func ptr's (next to the flags related one)
and wanted to use 'sd level folding via sd degenerate' in MC and DIE
level to have different sd energy information per CPU.
In any case, this dependency on FORCE_SD_OVERLAP would be less than
nice for this feature :-( A way out would be an 'int cpu' parameter,
but we already discussed that back then for the flags function.
Thanks,
-- Dietmar
>
> Vincent
>
[...]