Message-ID: <87r1jfmn8d.mognet@arm.com>
Date: Mon, 12 Apr 2021 18:22:42 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: tglx@...utronix.de, mingo@...nel.org, bigeasy@...utronix.de,
swood@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vincent.donnefort@....com, qais.yousef@....com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] sched: Use cpu_dying() to fix balance_push vs hotplug-rollback
On 12/04/21 14:03, Peter Zijlstra wrote:
> On Thu, Mar 11, 2021 at 03:13:04PM +0000, Valentin Schneider wrote:
>> Peter Zijlstra <peterz@...radead.org> writes:
>> > @@ -7910,6 +7908,14 @@ int sched_cpu_deactivate(unsigned int cp
>> > }
>> > rq_unlock_irqrestore(rq, &rf);
>> >
>> > + /*
>> > + * From this point forward, this CPU will refuse to run any task that
>> > + * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
>> > + * push those tasks away until this gets cleared, see
>> > + * sched_cpu_dying().
>> > + */
>> > + balance_push_set(cpu, true);
>> > +
>>
>> AIUI with cpu_dying_mask being flipped before even entering
>> sched_cpu_deactivate(), we don't need this to be before the
>> synchronize_rcu() anymore; is there more than that to why you're punting it
>> back this side of it?
>
> I think it does need to be like this; we need to clearly separate
> the active=true and balance_push_set(). If we were to somehow observe
> both balance_push_set() and active==false, we'd be in trouble.
>
I'm afraid I don't follow; we're replacing a read of rq->balance_push with
cpu_dying(), and those are still written on the same side of the
synchronize_rcu(). What am I missing?
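To spell out the write-side ordering I'm picturing (hand-wavy sketch,
paraphrased rather than the actual code):

	/* hotplug core, before the sched callbacks are reached */
	set_cpu_dying(cpu, true);

	sched_cpu_deactivate(cpu)
		set_cpu_active(cpu, false);
		/* pre-patch: rq->balance_push was also written around here */
		...
		synchronize_rcu();

i.e. both the old and the new "should we push?" condition are written
before the synchronize_rcu(), so I don't see what moving
balance_push_set() past it buys us.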
>> > #ifdef CONFIG_SCHED_SMT
>> > /*
>> > * When going down, decrement the number of cores with SMT present.
>>
>> > @@ -8206,7 +8212,7 @@ void __init sched_init(void)
>> > rq->sd = NULL;
>> > rq->rd = NULL;
>> > rq->cpu_capacity = rq->cpu_capacity_orig = SCHED_CAPACITY_SCALE;
>> > - rq->balance_callback = NULL;
>> > + rq->balance_callback = &balance_push_callback;
>> > rq->active_balance = 0;
>> > rq->next_balance = jiffies;
>> > rq->push_cpu = 0;
>> > @@ -8253,6 +8259,7 @@ void __init sched_init(void)
>> >
>> > #ifdef CONFIG_SMP
>> > idle_thread_set_boot_cpu();
>> > + balance_push_set(smp_processor_id(), false);
>> > #endif
>> > init_sched_fair_class();
>> >
>>
>> I don't get what these two changes do - the end result is the same as
>> before, no?
>
> Not quite; we have to make sure the state of the offline CPUs matches
> that of a CPU that's been offlined. For consistency if nothing else, but
> it's actually important for a point below.
>
>> Also, AIUI this patch covers the cpu_dying -> !cpu_dying rollback case
>> since balance_push gets numbed down by !cpu_dying. What about the other way
>> around (hot-plug failure + rollback)? We may have allowed !pcpu tasks on the
>> now-dying CPU, and we'd need to re-install the balance_push callback.
>
> This is in fact handled. Note how the previous point initialized the
> offline CPU to have the push_callback installed.
>
> Also note how balance_push() re-instates itself unconditionally.
>
> So the thing is, we install the push callback on deactivate() and leave
> it in place until activate(); it is always there, regardless of which way
> we're moving.
>
> However, it is only effective while going down, see the early exit.
Oooh, I can't read: only the boot CPU gets its callback uninstalled in
sched_init()! So secondaries keep push_callback installed up until
sched_cpu_activate(), but as you said it's not effective unless a rollback
happens.
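Paraphrasing balance_push() as I now read it (a sketch, not the exact
code from the patch):

	static void balance_push(struct rq *rq)
	{
		...
		/* Re-instate itself unconditionally, so it stays persistent */
		rq->balance_callback = &balance_push_callback;

		/* The early exit: only actually push while going down */
		if (!cpu_dying(rq->cpu))
			return;

		/* ... push away anything that isn't pcpu / migrate_disable() ... */
	}

so leaving it installed on a CPU that is up (or coming up) is harmless.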
Now, doesn't that mean we should *not* uninstall the callback in
sched_cpu_dying()? AFAIK it's possible for the initial secondary CPU
boot to go fine, but for the next offline+online cycle to fail while going
up - that rollback would need to happen with push_callback installed.
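Roughly the sequence I'm worried about (assuming I haven't muddled the
hotplug step ordering):

	offline:   sched_cpu_deactivate()   /* push callback installed   */
	           sched_cpu_dying()        /* push callback uninstalled */
	online:    some step fails before sched_cpu_activate()
	rollback:  we head back down, but the push callback is gone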