Message-ID: <4dd2c694-bb4b-4171-b47f-e005571518ff@paulmck-laptop>
Date: Mon, 14 Jul 2025 10:20:22 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Joel Fernandes <joelagnelf@...dia.com>, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Andrea Righi <arighi@...dia.com>,
Frederic Weisbecker <frederic@...nel.org>, rcu@...r.kernel.org
Subject: Re: [PATCH v2] smp: Document preemption and stop_machine() mutual
exclusion
On Tue, Jul 08, 2025 at 03:52:44PM -0700, Paul E. McKenney wrote:
> On Tue, Jul 08, 2025 at 09:23:21AM +0200, Peter Zijlstra wrote:
[ . . . ]
> > Best to just re-formulate the entire comment I think. State it provides
> > exclusion vs all non-preemptible regions in the kernel -- at insane cost
> > and should not be used when humanly possible :-)
>
> How about like this?
Hearing no objections... ;-)
Thanx, Paul
> ------------------------------------------------------------------------
>
> commit 3f0626b0514ccda56d15995e5bd1d1552f828705
> Author: Paul E. McKenney <paulmck@...nel.org>
> Date: Tue Jul 8 15:47:29 2025 -0700
>
> stop_machine: Improve kernel-doc function-header comments
>
> Add more detail to the kernel-doc function-header comments for
> stop_machine(), stop_machine_cpuslocked(), and stop_core_cpuslocked().
>
> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
>
> diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
> index 3132262a404dc..72820503514cc 100644
> --- a/include/linux/stop_machine.h
> +++ b/include/linux/stop_machine.h
> @@ -88,55 +88,73 @@ static inline void print_stop_info(const char *log_lvl, struct task_struct *task
> #endif /* CONFIG_SMP */
>
> /*
> - * stop_machine "Bogolock": stop the entire machine, disable
> - * interrupts. This is a very heavy lock, which is equivalent to
> - * grabbing every spinlock (and more). So the "read" side to such a
> - * lock is anything which disables preemption.
> + * stop_machine "Bogolock": stop the entire machine, disable interrupts.
> + * This is a very heavy lock, which is equivalent to grabbing every raw
> + * spinlock (and more). So the "read" side to such a lock is anything
> + * which disables preemption.
> */
> #if defined(CONFIG_SMP) || defined(CONFIG_HOTPLUG_CPU)
>
> /**
> * stop_machine: freeze the machine on all CPUs and run this function
> * @fn: the function to run
> - * @data: the data ptr for the @fn()
> - * @cpus: the cpus to run the @fn() on (NULL = any online cpu)
> + * @data: the data ptr to pass to @fn()
> + * @cpus: the cpus to run @fn() on (NULL = run on each online CPU)
> *
> - * Description: This causes a thread to be scheduled on every cpu,
> - * each of which disables interrupts. The result is that no one is
> - * holding a spinlock or inside any other preempt-disabled region when
> - * @fn() runs.
> + * Description: This causes a thread to be scheduled on every CPU, which
> + * will run with interrupts disabled. Each CPU specified by @cpus will
> + * run @fn. While @fn is executing, there will be no other CPUs holding
> + * a raw spinlock or running within any other type of preempt-disabled
> + * region of code.
> *
> - * This can be thought of as a very heavy write lock, equivalent to
> - * grabbing every spinlock in the kernel.
> + * When @cpus specifies only a single CPU, this can be thought of as
> + * a reader-writer lock where readers disable preemption (for example,
> + * by holding a raw spinlock) and where the insanely heavy writers run
> + * @fn while also preventing any other CPU from doing any useful work.
> + * These writers can also be thought of as having implicitly grabbed every
> + * raw spinlock in the kernel.
> *
> - * Protects against CPU hotplug.
> + * When @fn is a no-op, this can be thought of as an RCU implementation
> + * where readers again disable preemption and writers use stop_machine()
> + * in place of synchronize_rcu(), albeit with orders of magnitude more
> + * disruption than even that of synchronize_rcu_expedited().
> + *
> + * Although only one stop_machine() operation can proceed at a time,
> + * the possibility of blocking in cpus_read_lock() means that the caller
> + * cannot usefully rely on this serialization.
> + *
> + * Return: 0 if all invocations of @fn return zero. Otherwise, the
> + * value returned by an arbitrarily chosen member of the set of calls to
> + * @fn that returned non-zero.
> */
> int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
>
> /**
> * stop_machine_cpuslocked: freeze the machine on all CPUs and run this function
> * @fn: the function to run
> - * @data: the data ptr for the @fn()
> - * @cpus: the cpus to run the @fn() on (NULL = any online cpu)
> + * @data: the data ptr to pass to @fn()
> + * @cpus: the cpus to run @fn() on (NULL = run on each online CPU)
> + *
> + * Same as above. Avoids nested calls to cpus_read_lock().
> *
> - * Same as above. Must be called from with in a cpus_read_lock() protected
> - * region. Avoids nested calls to cpus_read_lock().
> + * Context: Must be called from within a cpus_read_lock() protected region.
> */
> int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
>
> /**
> * stop_core_cpuslocked: - stop all threads on just one core
> * @cpu: any cpu in the targeted core
> - * @fn: the function to run
> - * @data: the data ptr for @fn()
> + * @fn: the function to run on each CPU in the core containing @cpu
> + * @data: the data ptr to pass to @fn()
> *
> - * Same as above, but instead of every CPU, only the logical CPUs of a
> - * single core are affected.
> + * Same as above, but instead of every CPU, only the logical CPUs of the
> + * single core containing @cpu are affected.
> *
> * Context: Must be called from within a cpus_read_lock() protected region.
> *
> - * Return: 0 if all executions of @fn returned 0, any non zero return
> - * value if any returned non zero.
> + * Return: 0 if all invocations of @fn return zero. Otherwise, the
> + * value returned by an arbitrarily chosen member of the set of calls to
> + * @fn that returned non-zero.
> */
> int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data);
>