Message-ID: <20190718160601.GP3402@hirez.programming.kicks-ass.net>
Date: Thu, 18 Jul 2019 18:06:01 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: luferry <luferry@....com>, Rik van Riel <riel@...riel.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: Re: [PATCH v2] smp: avoid generic_exec_single cause system lockup

On Thu, Jul 18, 2019 at 11:58:47AM +0200, Thomas Gleixner wrote:
> Subject: smp: Warn on function calls from softirq context
> From: Thomas Gleixner <tglx@...utronix.de>
> Date: Thu, 18 Jul 2019 11:20:09 +0200
>
> It's clearly documented that smp function calls cannot be invoked from
> softirq handling context. Unfortunately nothing enforces that or emits a
> warning.
>
> A single function call can be invoked from softirq context only via
> smp_call_function_single_async().
>
> Reported-by: luferry <luferry@....com>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> ---
> kernel/smp.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -291,6 +291,15 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
> WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
> && !oops_in_progress);
>
> + /*
> + * Can deadlock when the softirq is executed on return from
> + * interrupt and the interrupt hit between llist_add() and
> + * arch_send_call_function_single_ipi() because then this
> + * invocation sees the list non-empty, skips the IPI send
> + * and waits forever.
> + */
> + WARN_ON_ONCE(in_serving_softirq() && wait);
> +
> csd = &csd_stack;
> if (!wait) {
> csd = this_cpu_ptr(&csd_data);
> @@ -416,6 +425,13 @@ void smp_call_function_many(const struct cpumask *mask,
> WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
> && !oops_in_progress && !early_boot_irqs_disabled);
>
> + /*
> + * Bottom half handlers are not allowed to call this as they might
> + * corrupt cfd_data when the interrupt which triggered softirq
> + * processing hit this function.
> + */
> + WARN_ON_ONCE(in_serving_softirq());
> +
> /* Try to fastpath. So, what's a CPU they want? Ignoring this one. */
> cpu = cpumask_first_and(mask, cpu_online_mask);
> if (cpu == this_cpu)
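
Just to spell out what the changelog above does allow: the only safe
pattern from softirq context is the async variant with a caller-owned
csd. A rough sketch only, not part of the patch; my_func, my_csd and
my_softirq_part are made-up names:

#include <linux/smp.h>

static void my_func(void *info)
{
	/* runs on the target CPU in IPI context; must not sleep */
}

/* caller-owned csd; must not be reused while a call is still in flight */
static call_single_data_t my_csd = {
	.func	= my_func,
};

static void my_softirq_part(int target_cpu)
{
	/* fire-and-forget from softirq context; never wait for completion */
	smp_call_function_single_async(target_cpu, &my_csd);
}
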
As we discussed on IRC, it is worse than that: we can only use these
functions from task/process context. We need something like the below.

I've built a kernel with this applied and nothing went *splat*.

diff --git a/kernel/smp.c b/kernel/smp.c
index 616d4d114847..7dbcb402c2fc 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -291,6 +291,14 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
&& !oops_in_progress);
+ /*
+ * When @wait we can deadlock when we interrupt between llist_add() and
+ * arch_send_call_function_ipi*(); when !@wait we can deadlock due to
+ * csd_lock() because the interrupt context uses the same csd
+ * storage.
+ */
+ WARN_ON_ONCE(!in_task());
+
csd = &csd_stack;
if (!wait) {
csd = this_cpu_ptr(&csd_data);
@@ -416,6 +424,14 @@ void smp_call_function_many(const struct cpumask *mask,
WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
&& !oops_in_progress && !early_boot_irqs_disabled);
+ /*
+ * When @wait we can deadlock when we interrupt between llist_add() and
+ * arch_send_call_function_ipi*(); when !@wait we can deadlock due to
+ * csd_lock() because the interrupt context uses the same csd
+ * storage.
+ */
+ WARN_ON_ONCE(!in_task());
+
/* Try to fastpath. So, what's a CPU they want? Ignoring this one. */
cpu = cpumask_first_and(mask, cpu_online_mask);
if (cpu == this_cpu)
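
And FWIW, a caller that currently does the synchronous call from a
bottom half can push the whole thing out to process context, where
in_task() is true, and do the waiting there. Again only a rough sketch,
all my_* names are invented for illustration:

#include <linux/smp.h>
#include <linux/workqueue.h>

static int my_target_cpu;	/* chosen elsewhere; placeholder */

static void my_remote_func(void *info)
{
	/* runs on my_target_cpu in IPI context */
}

static void my_work_fn(struct work_struct *work)
{
	/* process context: waiting for the remote call to complete is fine */
	smp_call_function_single(my_target_cpu, my_remote_func, NULL, 1);
}

static DECLARE_WORK(my_work, my_work_fn);

static void my_bh_handler(void)
{
	/* softirq context: just defer, don't call smp_call_function_*() here */
	schedule_work(&my_work);
}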