Message-ID: <1284116817.402.33.camel@laptop>
Date: Fri, 10 Sep 2010 13:06:57 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Ingo Molnar <mingo@...e.hu>,
Venkatesh Pallipadi <venki@...gle.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Jens Axboe <axboe@...nel.dk>
Subject: Re: [PATCH] generic-ipi: fix deadlock in __smp_call_function_single

On Thu, 2010-09-09 at 15:50 +0200, Heiko Carstens wrote:
> From: Heiko Carstens <heiko.carstens@...ibm.com>
>
> Just got my 6-way machine into a state where cpu 0 is in an endless loop
> within __smp_call_function_single.
> All other cpus are idle.
>
> The call trace on cpu 0 looks like this:
>
> __smp_call_function_single
> scheduler_tick
> update_process_times
> tick_sched_timer
> __run_hrtimer
> hrtimer_interrupt
> clock_comparator_work
> do_extint
> ext_int_handler
> ----> timer irq
> cpu_idle
>
> __smp_call_function_single got called from nohz_balancer_kick (inlined)
> with the remote cpu being 1, wait being 0, and the per-cpu variable
> remote_sched_softirq_cb (a call_single_data) of the current cpu (0).
>
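For reference, the caller in question looks roughly like this; this is
reconstructed from memory of the 2.6.36-era scheduler code, so apart from
remote_sched_softirq_cb (named above) the details, in particular how
ilb_cpu is picked, may be slightly off:

	/* per-cpu csd used to kick the idle load balancer */
	static DEFINE_PER_CPU(struct call_single_data, remote_sched_softirq_cb);

	static void nohz_balancer_kick(int cpu)
	{
		int ilb_cpu;
		struct call_single_data *cp;

		/* pick an idle cpu to do the balancing on our behalf
		 * (exact selection logic elided/approximate) */
		ilb_cpu = get_nohz_load_balancer();
		if (ilb_cpu >= nr_cpu_ids)
			return;

		if (!cpu_rq(ilb_cpu)->nohz_balance_kick) {
			cpu_rq(ilb_cpu)->nohz_balance_kick = 1;
			/* the csd belongs to the *sending* cpu ... */
			cp = &per_cpu(remote_sched_softirq_cb, cpu);
			/* ... and nothing here guarantees ilb_cpu != cpu */
			__smp_call_function_single(ilb_cpu, cp, 0);
		}
	}
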
> Then it loops forever when it tries to grab the lock of the
> call_single_data, since it is already locked and enqueued on cpu 0.
>
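The spinning itself is csd_lock()/csd_lock_wait() in kernel/smp.c, which
(roughly, as of 2.6.36) looks like the sketch below. CSD_FLAG_LOCK is only
cleared after the IPI handler on the target cpu has run the function, so
if the still-locked csd is queued on the very cpu that is spinning, and
that cpu sits in irq context, it can never make progress:

	static void csd_lock_wait(struct call_single_data *data)
	{
		/* wait for the previous user of this csd to release it */
		while (data->flags & CSD_FLAG_LOCK)
			cpu_relax();
	}

	static void csd_lock(struct call_single_data *data)
	{
		csd_lock_wait(data);
		data->flags = CSD_FLAG_LOCK;

		/* order the ->flags store before later writes to the csd */
		smp_mb();
	}
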
> My theory of how this could have happened: for some reason the scheduler
> decided to call __smp_call_function_single on its own cpu and sent
> an IPI to itself. The interrupt stays pending since IRQs are disabled.
> If the hypervisor then schedules the cpu away, it might happen that upon
> rescheduling both the IPI and the timer IRQ are pending.
> If interrupts are then enabled again, it depends on which one gets
> delivered first.
> If the timer interrupt gets delivered first, we end up with the local
> deadlock seen in the call trace above.
>
> Let's make __smp_call_function_single check if the target cpu is the
> current cpu and execute the function immediately just like
> smp_call_function_single does. That should prevent at least the
> scenario described here.
>
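For comparison, the self-call fast path in smp_call_function_single()
that this mirrors looks roughly like the sketch below (abbreviated from
the 2.6.36-era kernel/smp.c; the deadlock warning and the csd setup for
the remote case are elided, so treat it as approximate):

	int smp_call_function_single(int cpu, void (*func)(void *info),
				     void *info, int wait)
	{
		unsigned long flags;
		int this_cpu, err = 0;

		/* prevent preemption and migration to another cpu */
		this_cpu = get_cpu();

		if (cpu == this_cpu) {
			/* run locally, with irqs off like an IPI handler would */
			local_irq_save(flags);
			func(info);
			local_irq_restore(flags);
		} else if (cpu_online(cpu)) {
			/* ... set up/lock a csd and call generic_exec_single() ... */
		} else {
			err = -ENXIO;
		}

		put_cpu();
		return err;
	}
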
> It might also be that the scheduler is not supposed to call
> __smp_call_function_single with the remote cpu being the current cpu,
> but that is a different issue.
>
> Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>

Right, so it looks like all other users of __smp_call_function_single()
do indeed ensure not to call it on self, but your patch does make sense.

Acked-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> ---
> kernel/smp.c | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 75c970c..f1427d8 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -376,8 +376,10 @@ EXPORT_SYMBOL_GPL(smp_call_function_any);
> void __smp_call_function_single(int cpu, struct call_single_data *data,
> int wait)
> {
> - csd_lock(data);
> + unsigned int this_cpu;
> + unsigned long flags;
>
> + this_cpu = get_cpu();
> /*
> * Can deadlock when called with interrupts disabled.
> * We allow cpu's that are not yet online though, as no one else can
> @@ -387,7 +389,15 @@ void __smp_call_function_single(int cpu, struct call_single_data *data,
> WARN_ON_ONCE(cpu_online(smp_processor_id()) && wait && irqs_disabled()
> && !oops_in_progress);
>
> - generic_exec_single(cpu, data, wait);
> + if (cpu == this_cpu) {
> + local_irq_save(flags);
> + data->func(data->info);
> + local_irq_restore(flags);
> + } else {
> + csd_lock(data);
> + generic_exec_single(cpu, data, wait);
> + }
> + put_cpu();
> }
>
> /**