Message-ID: <xhsmhv8ud1ey9.mognet@vschneid.remote.csb>
Date: Tue, 10 May 2022 18:21:18 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Yury Norov <yury.norov@...il.com>
Cc: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
David Laight <David.Laight@...LAB.COM>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Joe Perches <joe@...ches.com>,
Julia Lawall <Julia.Lawall@...ia.fr>,
Michał Mirosław <mirq-linux@...e.qmqm.pl>,
Nicholas Piggin <npiggin@...il.com>,
Nicolas Palix <nicolas.palix@...g.fr>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Matti Vaittinen <Matti.Vaittinen@...rohmeurope.com>,
linux-kernel@...r.kernel.org, Ben Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Mel Gorman <mgorman@...e.de>,
Steven Rostedt <rostedt@...dmis.org>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [PATCH 17/22] sched/core: fix opencoded cpumask_any_but()

On 10/05/22 18:37, Peter Zijlstra wrote:
> On Tue, May 10, 2022 at 08:47:45AM -0700, Yury Norov wrote:
>> sched_core_cpu_starting() and sched_core_cpu_deactivate() implement
>> opencoded cpumask_any_but(). Fix it.
>>
>> CC: Ben Segall <bsegall@...gle.com>
>> CC: Daniel Bristot de Oliveira <bristot@...hat.com>
>> CC: Dietmar Eggemann <dietmar.eggemann@....com>
>> CC: Ingo Molnar <mingo@...hat.com>
>> CC: Juri Lelli <juri.lelli@...hat.com>
>> CC: Mel Gorman <mgorman@...e.de>
>> CC: Peter Zijlstra <peterz@...radead.org>
>> CC: Steven Rostedt <rostedt@...dmis.org>
>> CC: Valentin Schneider <vschneid@...hat.com>
>> CC: Vincent Guittot <vincent.guittot@...aro.org>
>> CC: linux-kernel@...r.kernel.org
>> Signed-off-by: Yury Norov <yury.norov@...il.com>
>> ---
>> kernel/sched/core.c | 33 +++++++++++++--------------------
>> 1 file changed, 13 insertions(+), 20 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index f5ebc392493d..9700001948d0 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -6125,7 +6125,7 @@ static void queue_core_balance(struct rq *rq)
>> static void sched_core_cpu_starting(unsigned int cpu)
>> {
>> const struct cpumask *smt_mask = cpu_smt_mask(cpu);
>> - struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
>> + struct rq *rq = cpu_rq(cpu), *core_rq;
>> unsigned long flags;
>> int t;
>>
>> @@ -6138,19 +6138,16 @@ static void sched_core_cpu_starting(unsigned int cpu)
>> goto unlock;
>>
>> /* find the leader */
>> - for_each_cpu(t, smt_mask) {
>> - if (t == cpu)
>> - continue;
>> - rq = cpu_rq(t);
>> - if (rq->core == rq) {
>> - core_rq = rq;
>> - break;
>> - }
>> - }
>> + t = cpumask_any_but(smt_mask, cpu);
>> + if (t >= nr_cpu_ids)
>> + goto unlock;
>>
>> - if (WARN_ON_ONCE(!core_rq)) /* whoopsie */
>> + rq = cpu_rq(t);
>> + if (WARN_ON_ONCE(rq->core != rq)) /* whoopsie */
>> goto unlock;
>>
>> + core_rq = rq;
>> +
>> /* install and validate core_rq */
>> for_each_cpu(t, smt_mask) {
>> rq = cpu_rq(t);
>
> I don't think this is equivalent. Imagine SMT4, with:
>
> rqN->core = rq0
>
> Now, further suppose smt0-2 are online and we're about to online smt3.
> Then t above is free to be smt2, which then results in insta triggering:
>
> + if (WARN_ON_ONCE(rq->core != rq)) /* whoopsie */
>
> You seem to have lost how the first loop searches for rq->core.
>

cpumask_any() is actually cpumask_first(), so t should be smt0 in that
case. However, if for some reason rq->core isn't the first online CPU in
smt_mask \ {cpu} (which I think can happen if you offline smt0-1 then
re-online smt0), then yes that splats.
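
To make the failure mode concrete, here's a standalone userspace sketch
(illustrative names only, none of this is real kernel code) of how the
old leader search and the new cpumask_any_but() pick diverge once
leadership has migrated to smt2:

/*
 * Scenario: SMT4; smt0-1 were offlined so leadership migrated to
 * rq2, smt0 was re-onlined, and we are now onlining smt1.
 */
#include <stdio.h>

#define NR_SMT 4

struct rq { struct rq *core; };

static struct rq rqs[NR_SMT];

/* Mimics cpumask_any_but(): first online sibling != cpu. */
static int any_but(const int *online, int cpu)
{
	int t;

	for (t = 0; t < NR_SMT; t++)
		if (online[t] && t != cpu)
			return t;
	return NR_SMT;
}

/* The original open-coded search: keep scanning until rq->core == rq. */
static int find_leader(const int *online, int cpu)
{
	int t;

	for (t = 0; t < NR_SMT; t++)
		if (online[t] && t != cpu && rqs[t].core == &rqs[t])
			return t;
	return NR_SMT;
}

int main(void)
{
	int online[NR_SMT] = { 1, 1, 1, 1 };
	int t;

	/* Leadership previously migrated to rq2. */
	for (t = 0; t < NR_SMT; t++)
		rqs[t].core = &rqs[2];

	t = any_but(online, 1);		/* returns smt0 */
	printf("cpumask_any_but picks smt%d, rq->core == rq? %s\n",
	       t, rqs[t].core == &rqs[t] ? "yes" : "no -> splat");

	t = find_leader(online, 1);	/* returns smt2 */
	printf("old loop finds the leader, smt%d\n", t);
	return 0;
}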

> Please, be more careful. Also, all of this is a super cold path; don't
> bother with optimizations. Many of the patches you have in this series
> fall under that.

I tend to agree. I do like the cpumask_weight_eq() stuff because it's
low-hanging fruit and can even be autopatched with Coccinelle, but the
open-coded stuff in cold paths isn't as relevant (nor as obvious as it
may look :)).
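
For reference, the kind of conversion I mean (a sketch only, assuming
the cpumask_weight_eq() helper proposed earlier in this series; mask and
do_something() are placeholders):

	/* Before: computes the full weight just to compare it. */
	if (cpumask_weight(mask) == 1)
		do_something();

	/* After: the helper can stop counting early, and the pattern
	 * is mechanical enough for a Coccinelle rule to rewrite. */
	if (cpumask_weight_eq(mask, 1))
		do_something();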