Message-ID: <CA+55aFxttmjXJX4wtwSpTOdDCMwpSopAsL3MJ1n8mJSJOEpyhQ@mail.gmail.com>
Date: Tue, 18 Nov 2014 09:20:02 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Dave Jones <davej@...hat.com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
"the arch/x86 maintainers" <x86@...nel.org>,
Don Zickus <dzickus@...hat.com>
Subject: Re: frequent lockups in 3.18rc4
On Tue, Nov 18, 2014 at 6:52 AM, Dave Jones <davej@...hat.com> wrote:
>
> Here's the first hit. Curiously, one cpu is missing.
That might be the CPU3 that isn't responding to IPIs due to some bug..
> NMI watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [trinity-c180:17837]
> RIP: 0010:[<ffffffffa91a0db0>] [<ffffffffa91a0db0>] bad_range+0x0/0x90
Hmm. Something looping in the page allocator? Not waiting for a lock,
but livelocked? I'm not seeing anything here that should trigger the
NMI watchdog at all.
Can the NMI watchdog get confused somehow?
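For reference, the soft-lockup side of the watchdog is really just a
timestamp comparison between a per-cpu hrtimer and the per-cpu watchdog
kthread - something like this very simplified sketch (not the actual
kernel/watchdog.c code, names and numbers approximate):

    #include <stdio.h>
    #include <time.h>

    static time_t watchdog_touch_ts;              /* per-cpu in the real thing */
    static const time_t softlockup_thresh = 20;   /* ~2 * watchdog_thresh */

    /* the high-prio watchdog kthread: all it does is refresh the stamp */
    static void watchdog_thread_ran(void)
    {
            watchdog_touch_ts = time(NULL);
    }

    /* the per-cpu hrtimer callback, firing from interrupt context: if
     * the kthread hasn't managed to run for ~20s (something spinning
     * with preemption off), report a soft lockup even though nothing
     * is actually sleeping on a lock. */
    static void watchdog_timer_fired(int cpu)
    {
            long stuck = (long)(time(NULL) - watchdog_touch_ts);

            if (stuck > softlockup_thresh)
                    printf("BUG: soft lockup - CPU#%d stuck for %lds!\n",
                           cpu, stuck);
    }

    int main(void)
    {
            watchdog_thread_ran();
            watchdog_timer_fired(1);      /* stamp is fresh: no report */
            return 0;
    }

So anything that spins long enough with preemption disabled will trip
it, lock or no lock - which would at least be consistent with a
livelock in the allocator.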
> Call Trace:
> [<ffffffffa91a6030>] __alloc_pages_nodemask+0x230/0xd20
> [<ffffffffa91f400e>] alloc_pages_vma+0xee/0x1b0
> [<ffffffffa91b643e>] shmem_alloc_page+0x6e/0xc0
> [<ffffffffa91b8ff0>] shmem_getpage_gfp+0x630/0xa40
> [<ffffffffa91b9442>] shmem_write_begin+0x42/0x70
> [<ffffffffa919a684>] generic_perform_write+0xd4/0x1f0
> [<ffffffffa919d5d2>] __generic_file_write_iter+0x162/0x350
> [<ffffffffa919d7ff>] generic_file_write_iter+0x3f/0xb0
> [<ffffffffa92155e8>] do_iter_readv_writev+0x78/0xc0
> [<ffffffffa9216e18>] do_readv_writev+0xd8/0x2a0
> [<ffffffffa90cf426>] ? lock_release_holdtime.part.28+0xe6/0x160
> [<ffffffffa921706c>] vfs_writev+0x3c/0x50
And CPU2 is in that TLB flusher again:
> NMI backtrace for cpu 2
> RIP: 0010:[<ffffffffa9116dbe>] [<ffffffffa9116dbe>] generic_exec_single+0xee/0x1a0
> Call Trace:
> [<ffffffffa9045bb0>] ? do_flush_tlb_all+0x60/0x60
> [<ffffffffa9116f3a>] smp_call_function_single+0x6a/0xe0
> [<ffffffffa9117679>] smp_call_function_many+0x2b9/0x320
> [<ffffffffa9046370>] flush_tlb_mm_range+0xe0/0x370
> [<ffffffffa91cc762>] tlb_flush_mmu_tlbonly+0x42/0x50
> [<ffffffffa91cdd28>] unmap_single_vma+0x6b8/0x900
> [<ffffffffa91ce06c>] zap_page_range_single+0xfc/0x160
> [<ffffffffa91ce254>] unmap_mapping_range+0x134/0x190
.. and the code line implies that it's in that csd_lock_wait() loop,
again consistent with waiting for some other CPU. Presumably the
missing CPU3.
> NMI backtrace for cpu 0
> RIP: 0010:[<ffffffffa90ac450>] [<ffffffffa90ac450>] preempt_count_add+0x0/0xc0
> Call Trace:
> [<ffffffffa96bbb45>] cpuidle_enter_state+0x55/0x300
> [<ffffffffa96bbea7>] cpuidle_enter+0x17/0x20
> [<ffffffffa90c88f5>] cpu_startup_entry+0x4e5/0x630
> [<ffffffffa902d523>] start_secondary+0x1a3/0x220
And CPU0 is just in the idle loop (that RIP is literally the
instruction after the "mwait" according to the code line).
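(The idle sequence is basically monitor+mwait, so when the NMI wakes
the CPU the saved RIP is the very next instruction. Purely as an
illustration - this obviously only makes sense in ring 0, not as a
real user-space program:

    /* Simplified sketch of the x86 mwait idle sequence */
    static inline void mwait_idle_sketch(void *monitor_addr)
    {
            /* arm the monitor on the address we'll be poked through */
            asm volatile("monitor" : : "a" (monitor_addr), "c" (0), "d" (0));

            /* sleep until an interrupt/NMI or a write to the monitored
             * line wakes us up ... */
            asm volatile("mwait" : : "a" (0), "c" (0));

            /* ... and this is the address the NMI backtrace reports as
             * RIP, i.e. "the instruction after the mwait" */
    }

so CPU0 really is doing nothing interesting.)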
> INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler) took too long to run: 125.739 msecs
.. and that's us giving up on CPU3.
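(For context, the backtrace trigger is basically: mark every online
CPU as pending, send them all an NMI, then poll the pending mask with
a timeout - so a CPU that never takes the NMI just ends up missing
from the dump. Simplified sketch, not the exact arch/x86 code:

    #include <stdio.h>
    #include <stdbool.h>

    #define NR_CPUS 4

    static bool backtrace_pending[NR_CPUS];

    /* each CPU's NMI handler: dump the backtrace, clear our bit */
    static void backtrace_nmi_handler(int cpu)
    {
            printf("NMI backtrace for cpu %d\n", cpu);
            backtrace_pending[cpu] = false;
    }

    static void trigger_all_cpu_backtrace(void)
    {
            int cpu, ms;
            bool pending = true;

            for (cpu = 0; cpu < NR_CPUS; cpu++)
                    backtrace_pending[cpu] = true;

            /* apic->send_IPI_all(NMI_VECTOR) in the real code; here we
             * just pretend CPU3 never reacts to its NMI */
            for (cpu = 0; cpu < NR_CPUS; cpu++)
                    if (cpu != 3)
                            backtrace_nmi_handler(cpu);

            /* poll for a while, then give up on whoever is left */
            for (ms = 0; ms < 10 * 1000 && pending; ms++) {
                    pending = false;
                    for (cpu = 0; cpu < NR_CPUS; cpu++)
                            if (backtrace_pending[cpu])
                                    pending = true;
                    /* mdelay(1) in the real code */
            }
            if (pending)
                    printf("gave up on cpu(s) that never answered the NMI\n");
    }

    int main(void)
    {
            trigger_all_cpu_backtrace();
            return 0;
    }

which matches the output: three backtraces and one silent CPU.)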
So it does look like CPU3 is the problem, but sadly, CPU3 is
apparently not listening, and doesn't even react to the NMI, much less
a TLB flush IPI.
Not reacting to NMI could be:
(a) some APIC state issue
(b) we're already stuck in a loop in the previous NMI handler
(c) what?
Anybody?
Linus