lists.openwall.net - Open Source and information security mailing list archives
Date: Fri, 22 Apr 2022 16:00:32 -0400
From: Donghai Qiao <dqiao@...hat.com>
To: akpm@...ux-foundation.org, sfr@...b.auug.org.au, arnd@...db.de,
	peterz@...radead.org, heying24@...wei.com,
	andriy.shevchenko@...ux.intel.com, axboe@...nel.dk,
	rdunlap@...radead.org, tglx@...utronix.de, gor@...ux.ibm.com
Cc: donghai.w.qiao@...il.com, linux-kernel@...r.kernel.org,
	Donghai Qiao <dqiao@...hat.com>
Subject: [PATCH v2 03/11] smp: eliminate SCF_WAIT and SCF_RUN_LOCAL

Commit a32a4d8a815c ("smp: Run functions concurrently in
smp_call_function_many_cond()") was intended to improve the concurrency
of cross call execution between the local and remote CPUs. The change in
smp_call_function_many_cond() did what was intended, but the new macros
SCF_WAIT and SCF_RUN_LOCAL, and the code around them that handles the
local call, are unnecessary because the modified function
smp_call_function_many() is able to handle the local cross call itself.
So these two macros can be eliminated and the code implemented around
them removed as well.

This patch also fixes a comparison between a signed integer and an
unsigned integer in smp_call_function_many_cond().

The changes in this patch, together with the changes made in subsequent
patches, will eventually help eliminate the set of on_each_cpu*
functions.

Signed-off-by: Donghai Qiao <dqiao@...hat.com>
---
v1 -> v2: removed 'x' from the function names and changed XCALL to
SMP_CALL in the new macros

 kernel/smp.c | 32 +++++++-------------------------
 1 file changed, 7 insertions(+), 25 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index b1e6a8f77a9e..22e8f2bd770b 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -787,23 +787,13 @@ int smp_call_function_any(const struct cpumask *mask,
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
 
-/*
- * Flags to be used as scf_flags argument of smp_call_function_many_cond().
- *
- * %SCF_WAIT: Wait until function execution is completed
- * %SCF_RUN_LOCAL: Run also locally if local cpu is set in cpumask
- */
-#define SCF_WAIT	(1U << 0)
-#define SCF_RUN_LOCAL	(1U << 1)
-
 static void smp_call_function_many_cond(const struct cpumask *mask,
 					smp_call_func_t func, void *info,
-					unsigned int scf_flags,
+					bool wait,
 					smp_cond_func_t cond_func)
 {
 	int cpu, last_cpu, this_cpu = smp_processor_id();
 	struct call_function_data *cfd;
-	bool wait = scf_flags & SCF_WAIT;
 	bool run_remote = false;
 	bool run_local = false;
 	int nr_cpus = 0;
@@ -829,14 +819,14 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 	WARN_ON_ONCE(!in_task());
 
 	/* Check if we need local execution. */
-	if ((scf_flags & SCF_RUN_LOCAL) && cpumask_test_cpu(this_cpu, mask))
+	if (cpumask_test_cpu(this_cpu, mask))
 		run_local = true;
 
 	/* Check if we need remote execution, i.e., any CPU excluding this one. */
 	cpu = cpumask_first_and(mask, cpu_online_mask);
 	if (cpu == this_cpu)
 		cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
-	if (cpu < nr_cpu_ids)
+	if ((unsigned int)cpu < nr_cpu_ids)
 		run_remote = true;
 
 	if (run_remote) {
@@ -911,12 +901,8 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
  * @mask: The set of cpus to run on (only runs on online subset).
  * @func: The function to run. This must be fast and non-blocking.
  * @info: An arbitrary pointer to pass to the function.
- * @wait: Bitmask that controls the operation. If %SCF_WAIT is set, wait
- *        (atomically) until function has completed on other CPUs. If
- *        %SCF_RUN_LOCAL is set, the function will also be run locally
- *        if the local CPU is set in the @cpumask.
- *
- * If @wait is true, then returns once @func has returned.
+ * @wait: If wait is true, the call will not return until func()
+ *        has completed on other CPUs.
  *
  * You must not call this function with disabled interrupts or from a
  * hardware interrupt handler or from a bottom half handler. Preemption
@@ -925,7 +911,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask,
 void smp_call_function_many(const struct cpumask *mask, smp_call_func_t func,
 			    void *info, bool wait)
 {
-	smp_call_function_many_cond(mask, func, info, wait * SCF_WAIT, NULL);
+	smp_call_function_many_cond(mask, func, info, wait, NULL);
 }
 EXPORT_SYMBOL(smp_call_function_many);
 
@@ -1061,13 +1047,9 @@ void __init smp_init(void)
 void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
 			   void *info, bool wait, const struct cpumask *mask)
 {
-	unsigned int scf_flags = SCF_RUN_LOCAL;
-
-	if (wait)
-		scf_flags |= SCF_WAIT;
 
 	preempt_disable();
-	smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
+	smp_call_function_many_cond(mask, func, info, wait, cond_func);
 	preempt_enable();
 }
 EXPORT_SYMBOL(on_each_cpu_cond_mask);
-- 
2.27.0