Date:	Fri, 10 Sep 2010 13:23:07 +0200
From:	Jens Axboe <axboe@...nel.dk>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Heiko Carstens <heiko.carstens@...ibm.com>,
	Ingo Molnar <mingo@...e.hu>,
	Venkatesh Pallipadi <venki@...gle.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] generic-ipi: fix deadlock in __smp_call_function_single

On 2010-09-10 13:06, Peter Zijlstra wrote:
> On Thu, 2010-09-09 at 15:50 +0200, Heiko Carstens wrote:
>> From: Heiko Carstens <heiko.carstens@...ibm.com>
>>
>> Just got my 6-way machine into a state where cpu 0 is in an endless
>> loop within __smp_call_function_single. All other cpus are idle.
>>
>> The call trace on cpu 0 looks like this:
>>
>> __smp_call_function_single
>> scheduler_tick
>> update_process_times
>> tick_sched_timer
>> __run_hrtimer
>> hrtimer_interrupt
>> clock_comparator_work
>> do_extint
>> ext_int_handler
>> ----> timer irq
>> cpu_idle
>>
>> __smp_call_function_single got called from nohz_balancer_kick (inlined)
>> with the remote cpu being 1, wait being 0 and the per cpu variable
>> remote_sched_softirq_cb (call_single_data) of the current cpu (0).
>>
>> Then it loops forever when it tries to grab the lock of the
>> call_single_data, since it is already locked and enqueued on cpu 0.
>>
>> My theory of how this could have happened: for some reason the scheduler
>> decided to call __smp_call_function_single on its own cpu and sent an
>> IPI to itself. The interrupt stays pending since IRQs are disabled.
>> If the hypervisor then schedules the cpu away, it might happen that upon
>> rescheduling both the IPI and the timer IRQ are pending.
>> When interrupts are enabled again, it depends on which one gets
>> delivered first.
>> If the timer interrupt gets delivered first we end up with the local
>> deadlock as seen in the calltrace above.
>>
>> Let's make __smp_call_function_single check if the target cpu is the
>> current cpu and execute the function immediately just like
>> smp_call_function_single does. That should prevent at least the
>> scenario described here.
>>
>> It might also be that the scheduler is not supposed to call
>> __smp_call_function_single with the remote cpu being the current cpu,
>> but that is a different issue.
>>
>> Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>
> 
> Right, so it looks like all other users of __smp_call_function_single()
> do indeed ensure not to call it on self, but your patch does make sense.
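
For anyone following along, the endless loop Heiko describes is the csd
lock spin. Roughly, from kernel/smp.c of that era (a simplified sketch
from memory, not a verbatim quote):

	/*
	 * A call_single_data stays locked from the moment it is queued
	 * until the IPI handler has run it and cleared CSD_FLAG_LOCK.
	 */
	static void csd_lock_wait(struct call_single_data *data)
	{
		while (data->flags & CSD_FLAG_LOCK)
			cpu_relax();
	}

	static void csd_lock(struct call_single_data *data)
	{
		csd_lock_wait(data);
		data->flags = CSD_FLAG_LOCK;
		smp_mb();
	}

If cpu 0 re-enters __smp_call_function_single() from the timer interrupt
while its own remote_sched_softirq_cb is still queued and locked, the
self-IPI that would run the csd and clear the flag can't be delivered
(we're sitting in an interrupt handler with IRQs off), so csd_lock_wait()
spins forever.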

I guess it depends on how bulletproof you want that (core) API to be.
We've traditionally had this kind of support in similar functions so the
caller doesn't have to check, so the patch is fine with me too.
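
Not the actual hunk (which isn't quoted in this thread), but the change
being described basically boils down to something like the below --
helper names as in kernel/smp.c at the time, from memory:

	void __smp_call_function_single(int cpu, struct call_single_data *data,
					int wait)
	{
		unsigned int this_cpu;
		unsigned long flags;

		this_cpu = get_cpu();

		if (cpu == this_cpu) {
			/* run it right here instead of IPI'ing ourselves */
			local_irq_save(flags);
			data->func(data->info);
			local_irq_restore(flags);
		} else {
			csd_lock(data);
			generic_exec_single(cpu, data, wait);
		}
		put_cpu();
	}

which mirrors what smp_call_function_single() already does for the
cpu == this_cpu case.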

For extra credit, the function documentation should be modified as well:

 * __smp_call_function_single(): Run a function on another CPU
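
Something along these lines should do (just a suggestion):

 * __smp_call_function_single(): Run a function on a specific CPU,
 *				  possibly the current one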

Acked-by: Jens Axboe <jaxboe@...ionio.com>

-- 
Jens Axboe
