Message-ID: <AE387DFD-770B-47EB-AF85-4AB8950D8ABF@vmware.com>
Date: Tue, 16 Feb 2021 18:49:28 +0000
From: Nadav Amit <namit@...are.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Rik van Riel <riel@...riel.com>,
Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH v5 1/8] smp: Run functions concurrently in
smp_call_function_many_cond()
> On Feb 16, 2021, at 4:04 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Feb 09, 2021 at 02:16:46PM -0800, Nadav Amit wrote:
>> @@ -894,17 +911,12 @@ EXPORT_SYMBOL(on_each_cpu_mask);
>> void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
>> void *info, bool wait, const struct cpumask *mask)
>> {
>> - int cpu = get_cpu();
>> + unsigned int scf_flags = SCF_RUN_LOCAL;
>>
>> - smp_call_function_many_cond(mask, func, info, wait, cond_func);
>> - if (cpumask_test_cpu(cpu, mask) && cond_func(cpu, info)) {
>> - unsigned long flags;
>> + if (wait)
>> + scf_flags |= SCF_WAIT;
>>
>> - local_irq_save(flags);
>> - func(info);
>> - local_irq_restore(flags);
>> - }
>> - put_cpu();
>> + smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
>> }
>> EXPORT_SYMBOL(on_each_cpu_cond_mask);
>
> You lost the preempt_disable() there, I've added it back:
>
> ---
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -920,7 +920,9 @@ void on_each_cpu_cond_mask(smp_cond_func
> if (wait)
> scf_flags |= SCF_WAIT;
>
> + preempt_disable();
> smp_call_function_many_cond(mask, func, info, scf_flags, cond_func);
> + preempt_enable();
> }
> EXPORT_SYMBOL(on_each_cpu_cond_mask);
Indeed. I will add lockdep_assert_preemption_disabled() to
smp_call_function_many_cond() to prevent this mistake from recurring.
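For illustration, the proposed guard might look like the sketch below. This is a hedged sketch, not the actual patch: the function signature is assumed from the v5 series quoted above (with the scf_flags argument), and only the assertion line is the new part. lockdep_assert_preemption_disabled() is a real kernel macro that warns (under lockdep) if preemption is enabled at that point.

```c
/*
 * Sketch: assert preemption is disabled on entry, so callers like
 * on_each_cpu_cond_mask() that forget preempt_disable() are caught
 * immediately under lockdep rather than racing silently.
 * Signature assumed from the quoted v5 patch; body elided.
 */
static void smp_call_function_many_cond(const struct cpumask *mask,
					smp_call_func_t func, void *info,
					unsigned int scf_flags,
					smp_cond_func_t cond_func)
{
	lockdep_assert_preemption_disabled();
	/* ... existing IPI / local-run logic ... */
}
```

With this in place, a caller that drops the preempt_disable()/preempt_enable() pair around the call would trigger a lockdep splat on the first invocation instead of leaving a subtle CPU-migration race.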