Message-ID: <8dff6ae5ffaebfbcc55a01c04420fd478070b830.camel@infradead.org>
Date: Thu, 23 Mar 2023 23:12:21 +0000
From: David Woodhouse <dwmw2@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>,
Usama Arif <usama.arif@...edance.com>, kim.phillips@....com,
brgerst@...il.com
Cc: piotrgorski@...hyos.org, oleksandr@...alenko.name,
arjan@...ux.intel.com, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, x86@...nel.org,
pbonzini@...hat.com, paulmck@...nel.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
rcu@...r.kernel.org, mimoja@...oja.de, hewenliang4@...wei.com,
thomas.lendacky@....com, seanjc@...gle.com, pmenzel@...gen.mpg.de,
fam.zheng@...edance.com, punit.agrawal@...edance.com,
simon.evans@...edance.com, liangma@...ngbit.com,
gpiccoli@...lia.com
Subject: Re: [PATCH v16 3/8] cpu/hotplug: Add dynamic parallel bringup
states before CPUHP_BRINGUP_CPU
On Fri, 2023-03-24 at 00:05 +0100, Thomas Gleixner wrote:
> Still the rest can be simplified as below.
...
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -1504,13 +1504,45 @@ int bringup_hibernate_cpu(unsigned int s
>
> void bringup_nonboot_cpus(unsigned int setup_max_cpus)
> {
> - unsigned int cpu;
> + unsigned int cpu, n = 1;
>
> + /*
> + * On architectures which have setup the CPUHP_BP_PARALLEL_STARTUP
> + * state, this invokes all BP prepare states and the parallel
> + * startup state sends the startup IPI to each of the to be onlined
> + * APs. This avoids waiting for each AP to respond to the startup
> + * IPI in CPUHP_BRINGUP_CPU. The APs proceed through the low level
> + * bringup code and then wait for the control CPU to release them
> + * one by one for the final onlining procedure in the loop below.
> + *
> + * For architectures which do not support parallel bringup all
> + * states are fully serialized in the loop below.
> + */
> + if (!cpuhp_step_empty(true, CPUHP_BP_PARALLEL_STARTUP)) {
I'll take using cpuhp_step_empty().
> + for_each_present_cpu(cpu) {
> + if (n++ >= setup_max_cpus)
> + break;
> + cpu_up(cpu, CPUHP_BP_PARALLEL_STARTUP);
> + }
> + }
> +
> + /* Do the per CPU serialized bringup to ONLINE state */
> for_each_present_cpu(cpu) {
> if (num_online_cpus() >= setup_max_cpus)
> break;
> - if (!cpu_online(cpu))
> - cpu_up(cpu, CPUHP_ONLINE);
> +
> + if (!cpu_online(cpu)) {
> + struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
> + int ret = cpu_up(cpu, CPUHP_ONLINE);
> +
> + /*
> + * Due to the above preparation loop a failed online attempt
> + * might have only rolled back to CPUHP_BP_PARALLEL_STARTUP. Do the
> + * remaining cleanups. NOOP for the non parallel case.
> + */
> + if (ret && can_rollback_cpu(st))
> + WARN_ON(cpuhp_invoke_callback_range(false, cpu, st, CPUHP_OFFLINE));
> + }
And I'll take doing this bit unconditionally (it's basically a no-op if
they already got rolled all the way back to CPUHP_OFFLINE, right?).
But the additional complexity of having multiple steps is fairly
minimal, and I'm already planning to *use* another one even in x86, as
discussed.