Date:	Fri, 10 Feb 2012 19:58:03 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
Cc:	Sasha Levin <levinsasha928@...il.com>,
	Josh Boyer <jwboyer@...il.com>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Avi Kivity <avi@...hat.com>, kvm <kvm@...r.kernel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	x86 <x86@...nel.org>,
	Suresh B Siddha <suresh.b.siddha@...el.com>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Don Zickus <dzickus@...hat.com>
Subject: Re: WARNING: at arch/x86/kernel/smp.c:119
 native_smp_send_reschedule+0x25/0x43()

On Fri, 2012-02-10 at 15:36 +0530, Srivatsa S. Bhat wrote:
> >>> [   32.448626] ------------[ cut here ]------------
> >>> [   32.449160] WARNING: at arch/x86/kernel/smp.c:119 native_smp_send_reschedule+0x25/0x43()
> >>> [   32.449621] Pid: 1, comm: init_stage2 Not tainted 3.2.0+ #14
> >>> [   32.449621] Call Trace:
> >>> [   32.449621]  <IRQ>  [<ffffffff81041a44>] ? native_smp_send_reschedule+0x25/0x43
> >>> [   32.449621]  [<ffffffff810735b2>] warn_slowpath_common+0x7b/0x93
> >>> [   32.449621]  [<ffffffff810962cc>] ? tick_nohz_handler+0xc9/0xc9
> >>> [   32.449621]  [<ffffffff81073675>] warn_slowpath_null+0x15/0x18
> >>> [   32.449621]  [<ffffffff81041a44>] native_smp_send_reschedule+0x25/0x43
> >>> [   32.449621]  [<ffffffff81067a00>] smp_send_reschedule+0xa/0xc
> >>> [   32.449621]  [<ffffffff8106f25e>] scheduler_tick+0x21a/0x242
> >>> [   32.449621]  [<ffffffff8107da10>] update_process_times+0x62/0x73
> >>> [   32.449621]  [<ffffffff81096336>] tick_sched_timer+0x6a/0x8a
> >>> [   32.449621]  [<ffffffff8108c5eb>] __run_hrtimer.clone.26+0x55/0xcb
> >>> [   32.449621]  [<ffffffff8108cd77>] hrtimer_interrupt+0xcb/0x19b
> >>> [   32.449621]  [<ffffffff810428a8>] smp_apic_timer_interrupt+0x72/0x85
> >>> [   32.449621]  [<ffffffff8165a8de>] apic_timer_interrupt+0x6e/0x80
> >>> [   32.449621]  <EOI>  [<ffffffff8165928e>] ? _raw_spin_unlock_irqrestore+0x3a/0x3e
> >>> [   32.449621]  [<ffffffff81042f4e>] ? arch_local_irq_restore+0x6/0xd
> >>> [   32.449621]  [<ffffffff810430c4>] default_send_IPI_mask_allbutself_phys+0x78/0x88
> >>> [   32.449621]  [<ffffffff8106c3c4>] ? __migrate_task+0xf1/0xf1
> >>> [   32.449621]  [<ffffffff81045445>] physflat_send_IPI_allbutself+0x12/0x14
> >>> [   32.449621]  [<ffffffff81041aaf>] native_stop_other_cpus+0x4d/0xa8
> >>> [   32.449621]  [<ffffffff810411c6>] native_machine_shutdown+0x56/0x6d
> >>> [   32.449621]  [<ffffffff81048499>] kvm_shutdown+0x1a/0x1c
> >>> [   32.449621]  [<ffffffff810411f9>] machine_shutdown+0xa/0xc
> >>> [   32.449621]  [<ffffffff81041265>] native_machine_restart+0x20/0x32
> >>> [   32.449621]  [<ffffffff81041297>] machine_restart+0xa/0xc
> >>> [   32.449621]  [<ffffffff81081d53>] kernel_restart+0x49/0x4d
> >>> [   32.449621]  [<ffffffff81081f26>] sys_reboot+0x14b/0x18a
> >>> [   32.449621]  [<ffffffff81089937>] ? remove_wait_queue+0x4c/0x51
> >>> [   32.449621]  [<ffffffff8107637f>] ? do_wait+0x1a4/0x1e7
> >>> [   32.449621]  [<ffffffff8107735a>] ? sys_wait4+0xa8/0xbc
> >>> [   32.449621]  [<ffffffff8107522b>] ? clear_tsk_thread_flag+0xf/0xf
> >>> [   32.449621]  [<ffffffff81659a25>] ? async_page_fault+0x25/0x30
> >>> [   32.449621]  [<ffffffff81659e92>] system_call_fastpath+0x16/0x1b
> >>> [   32.449621] ---[ end trace d0f03651493fd3d6 ]---

OK, so a 'modern' kernel does it slightly differently, and I've no idea
what exactly goes wrong in your vintage version. But I can see the
current stuff going at it all wrong.

What seems to happen is that native_nmi_stop_other_cpus() NMI-broadcasts
for smp_stop_nmi_callback()->stop_this_cpu(), which, without any
serialization whatsoever, marks all remote CPUs offline and calls halt()
with IRQs disabled -> dead.
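
Roughly, that path looks like this (paraphrased from the 3.2-era
sources from memory, so treat the details as approximate):

	/* arch/x86/kernel/process.c (sketch) */
	void stop_this_cpu(void *dummy)
	{
		local_irq_disable();
		/* Remove this CPU -- nobody else is told about it */
		set_cpu_online(smp_processor_id(), false);
		disable_local_APIC();

		for (;;)
			halt();	/* IRQs are off -> never wakes up */
	}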

While we're waiting for all this to complete, the scheduler tries to
do a nohz load-balance and kicks a CPU it thinks is still around, and
we get the above splat because the NMI just marked it offline without
telling anybody about it.
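
The splat itself comes from the offline check in the reschedule IPI
path; roughly (again paraphrased, the line number is per your tree):

	/* arch/x86/kernel/smp.c (sketch) */
	static void native_smp_send_reschedule(int cpu)
	{
		if (unlikely(cpu_is_offline(cpu))) {
			WARN_ON(1);	/* smp.c:119 -- the warning above */
			return;
		}
		apic->send_IPI_mask(cpumask_of(cpu), RESCHEDULE_VECTOR);
	}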

Now, arguably you don't want to go through the whole hotplug crap to
shut down your machine, especially not on panic, but clearing the online
state without telling anybody about it is bound to lead to these things.
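
To illustrate the difference (a hypothetical sketch, not a proposed
patch): a real hotplug offline runs the notifier chain, so the scheduler
gets to migrate everything away and stop ticking the CPU first, whereas
the shutdown path just yanks the bit:

	/* Proper offline: notifiers fire, scheduler stops using the CPU */
	cpu_down(cpu);

	/* Shutdown/NMI path: mark it gone without telling anybody */
	set_cpu_online(smp_processor_id(), false);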

No immediate solution comes to mind...
