Message-ID: <xhsmhwneahri0.mognet@vschneid.remote.csb>
Date: Wed, 25 May 2022 10:48:55 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: Phil Auld <pauld@...hat.com>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] cpuhp: make target_store() a nop when target == state
On 24/05/22 15:37, Phil Auld wrote:
> Hi Valentin,
>
> I did it like this (shown below) and from my test it also works for
> this case.
>
> I could move it below the lock and goto out instead, if you think
> that is better.
I *think* the cpu_add_remove_lock mutex should be sufficient here.
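For reference (paraphrasing kernel/cpu.c from memory, so not necessarily the
exact code): the sysfs write ends up in cpu_down(), which already runs
everything under cpu_add_remove_lock via cpu_maps_update_begin()/done(), so
concurrent writers are serialized before we ever look at st->state:

	void cpu_maps_update_begin(void)
	{
		mutex_lock(&cpu_add_remove_lock);
	}

	static int cpu_down(unsigned int cpu, enum cpuhp_state target)
	{
		int err;

		cpu_maps_update_begin();
		err = cpu_down_maps_locked(cpu, target);
		cpu_maps_update_done();
		return err;
	}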
> It still seems better to me to stop this higher up
> because there's work being done in the out path too. We're not
> actually doing any hot(un)plug, so doing post-unplug cleanup seems
> iffy.
>
I think so too; I now realize _cpu_up() and _cpu_down() have slightly
different prologues: _cpu_up() does its hotplug state / cpu_present_mask
checks *after* grabbing the cpu_hotplug_lock, whereas _cpu_down() does them
*before*...
So I believe what you have below is fine, modulo whether we want to align
the prologues of these two functions or not :-)
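To illustrate what I mean (rough sketch from memory, not the verbatim code),
the bringup prologue does its checks with the lock held:

	/* _cpu_up() */
	cpus_write_lock();

	if (!cpu_present(cpu)) {
		ret = -EINVAL;
		goto out;
	}

	/*
	 * The caller of cpu_up() might have raced with another
	 * caller. Nothing to do.
	 */
	if (st->state >= target)
		goto out;

whereas the teardown one checks before taking the lock:

	/* _cpu_down() */
	if (num_online_cpus() == 1)
		return -EBUSY;

	if (!cpu_present(cpu))
		return -EINVAL;

	cpus_write_lock();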
> _cpu_down()
> ...
> out:
> 	cpus_write_unlock();
> 	/*
> 	 * Do post unplug cleanup. This is still protected against
> 	 * concurrent CPU hotplug via cpu_add_remove_lock.
> 	 */
> 	lockup_detector_cleanup();
> 	arch_smt_update();
> 	cpu_up_down_serialize_trainwrecks(tasks_frozen);
> 	return ret;
> }
>
> ----------
>
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 8a71b1149c60..e36788742d18 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -1130,6 +1130,13 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
>  	if (!cpu_present(cpu))
>  		return -EINVAL;
> 
> +	/*
> +	 * The caller of cpu_down() might have raced with another
> +	 * caller. Nothing to do.
> +	 */
> +	if (st->state <= target)
> +		return 0;
> +
>  	cpus_write_lock();
> 
>  	cpuhp_tasks_frozen = tasks_frozen;
>
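If we did want to align with _cpu_up() instead, the check would have to move
below cpus_write_lock() and bail via the out: label, i.e. something like this
(untested sketch, just to show the shape):

	cpus_write_lock();

	/*
	 * The caller of cpu_down() might have raced with another
	 * caller. Nothing to do.
	 */
	if (st->state <= target)
		goto out;

	cpuhp_tasks_frozen = tasks_frozen;

but as you point out that would still run the post-unplug cleanup bits for no
good reason, so the early return above looks fine to me.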
>
> Cheers,
> Phil
>
> --