Message-ID: <c5fec41b-87f1-be4e-475f-69c7394f5467@arm.com>
Date: Tue, 15 Oct 2019 11:22:12 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Valentin Schneider <valentin.schneider@....com>,
Quentin Perret <qperret@...gle.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Quentin Perret <qperret@...rret.net>,
"# v4 . 16+" <stable@...r.kernel.org>
Subject: Re: [PATCH] sched/topology: Disable sched_asym_cpucapacity on domain
destruction

On 14/10/2019 18:03, Valentin Schneider wrote:
> On 14/10/2019 14:52, Quentin Perret wrote:
>> Right, but that's not possible by definition -- static keys aren't
>> variables. The static keys for asym CPUs and for EAS are just to
>> optimize the case when they're disabled, but when they _are_ enabled,
>> you have no choice but to do another per-rd check.
>>
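
[ The gate being discussed looks roughly like the sketch below (hypothetical
  helper, not verbatim kernel code): the static key only short-circuits the
  fully symmetric case, the rcu-protected per-CPU pointer still has to be
  checked once the key is enabled. ]

        static inline struct sched_domain *cpu_asym_domain(int cpu)
        {
                struct sched_domain *sd;

                if (!static_branch_unlikely(&sched_asym_cpucapacity))
                        return NULL; /* fast path: no asymmetric rd anywhere */

                /* slow path: the rd this CPU belongs to may still be symmetric */
                sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));

                return sd; /* can be NULL, callers must check */
        }
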
>
> Bleh, right, realized my nonsense after sending the email.
>
>> And to clarify what I tried to say before, it might be possible to
>> 'count' the number of RDs that have SD_ASYM_CPUCAPACITY set using
>> static_branch_inc()/dec(), like we do for the SMT static key. I remember
>> trying to do something like that for EAS, but that was easier said than
>> done ... :)
>>
>
> Gotcha, the reason I didn't go with this is that I wanted to maintain
> the relationship between the key and the flag (you either have both or none).
> It feels icky to have the key set and to have a NULL sd_asym_cpucapacity
> pointer.
>
> An alternative might be to have a separate counter for asymmetric rd's,
> always disable the key on domain destruction and use that counter to figure
> out if we need to restore it. If we don't care about having a NULL SD
> pointer while the key is set, we could use the included counter as you're
> suggesting.
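
[ I.e. something like the hypothetical sketch below (not actual kernel code),
  where the key refcounts the asymmetric root domains the way
  sched_smt_present refcounts SMT cores: ]

        /* hypothetical hook, called when a root domain with
         * SD_ASYM_CPUCAPACITY is attached */
        static void asym_rd_attached(void)
        {
                static_branch_inc_cpuslocked(&sched_asym_cpucapacity);
        }

        /* hypothetical hook, called when such a root domain is destroyed;
         * the key reads false again once the count drops to zero */
        static void asym_rd_destroyed(void)
        {
                static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
        }
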

I still don't understand the benefit of the counter approach here.

sched_smt_present counts the number of cores with SMT. So if you have 2
SMT cores with 2 HW threads each and you hotplug out one CPU, you still
have sched_smt_present enabled, although one CPU doesn't have an SMT
thread sibling anymore.
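
Roughly (simplified) what sched_cpu_activate()/sched_cpu_deactivate() do
today:

        /* sched_cpu_activate(): this CPU brings its core up to 2 threads */
        if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
                static_branch_inc_cpuslocked(&sched_smt_present);

        /* sched_cpu_deactivate(): this CPU drops its core back to 1 thread */
        if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
                static_branch_dec_cpuslocked(&sched_smt_present);
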

Valentin's patch makes sure that sched_asym_cpucapacity is correctly set
when the sd hierarchy is rebuilt due to CPU hotplug. This includes the
unlikely scenario that an asymmetric CPU capacity system (based on DT's
capacity-dmips-mhz values) turns into a normal SMP system (i.e. symmetric
CPU capacity) because of the max frequency values of the CPUs involved.
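
Made-up example of that scenario (roughly mirroring what
drivers/base/arch_topology.c does when normalizing capacity-dmips-mhz by
max frequency; all numbers hypothetical):

        #include <stdio.h>

        int main(void)
        {
                /* "big" CPUs: capacity-dmips-mhz=4 @ max 1000000 kHz,
                 * "little" CPUs: capacity-dmips-mhz=2 @ max 2000000 kHz */
                unsigned long dmips[2]   = { 4, 2 };
                unsigned long max_khz[2] = { 1000000, 2000000 };
                unsigned long raw[2], max_raw = 0;
                int i;

                for (i = 0; i < 2; i++) {
                        raw[i] = dmips[i] * max_khz[i];
                        if (raw[i] > max_raw)
                                max_raw = raw[i];
                }

                /* both classes end up at 1024 -> no SD_ASYM_CPUCAPACITY */
                for (i = 0; i < 2; i++)
                        printf("class %d: capacity %lu\n", i,
                               raw[i] * 1024 / max_raw);

                return 0;
        }
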

Systems with a mix of asymmetric and symmetric CPU capacity rd's have to
live with the fact that wake_cap and misfit handling are enabled for
them. This should already be the case today.

There should be no SD_ASYM_CPUCAPACITY flag on the sd's of the CPUs in
the symmetric CPU capacity rd's, i.e. update_top_cache_domain() should
set sd_asym_cpucapacity=NULL for those CPUs.
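
Roughly (simplified) the relevant bit of update_top_cache_domain():

        /* NULL if no sd in this CPU's hierarchy carries SD_ASYM_CPUCAPACITY */
        sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY);
        rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
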

So as a rule we could say: even if a static key enables a code path, a
dereferenced sd pointer still has to be checked against NULL.