Message-ID: <551d9535.87628c0a.5324.7358@mx.google.com>
Date: Thu, 02 Apr 2015 12:15:01 -0700 (PDT)
From: Yasuaki Ishimatsu <yasu.isimatu@...il.com>
To: Dave Young <dyoung@...hat.com>
Cc: Xishi Qiu <qiuxishi@...wei.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de, bhe@...hat.com,
mingo@...hat.com, hpa@...or.com, akpm@...ux-foundation.org
Subject: Re: [PATCH] x86/numa: kernel stack corruption fix
On Wed, 1 Apr 2015 15:41:20 +0800
Dave Young <dyoung@...hat.com> wrote:
> On 04/01/15 at 03:27pm, Xishi Qiu wrote:
> > On 2015/4/1 13:11, Dave Young wrote:
> >
> > > Ccing Xishi Qiu who wrote the clear_kernel_node_hotplug code.
> > >
> > > On 04/01/15 at 12:53pm, Dave Young wrote:
> > >> I got below kernel panic during kdump test on Thinkpad T420 laptop:
> > >>
> > >> [ 0.000000] No NUMA configuration found
> > >> [ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000037ba4fff]
> > >> [ 0.000000] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff81d21910
> > >> [ 0.000000]
> > >> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.0.0-rc6+ #44
> > >> [ 0.000000] Hardware name: LENOVO 4236NUC/4236NUC, BIOS 83ET76WW (1.46 ) 07/05/2013
> > >> [ 0.000000] 0000000000000000 c70296ddd809e4f6 ffffffff81b67ce8 ffffffff817c2a26
> > >> [ 0.000000] 0000000000000000 ffffffff81a61c90 ffffffff81b67d68 ffffffff817bc8d2
> > >> [ 0.000000] 0000000000000010 ffffffff81b67d78 ffffffff81b67d18 c70296ddd809e4f6
> > >> [ 0.000000] Call Trace:
> > >> [ 0.000000] [<ffffffff817c2a26>] dump_stack+0x45/0x57
> > >> [ 0.000000] [<ffffffff817bc8d2>] panic+0xd0/0x204
> > >> [ 0.000000] [<ffffffff81d21910>] ? numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [ 0.000000] [<ffffffff8107741b>] __stack_chk_fail+0x1b/0x20
> > >> [ 0.000000] [<ffffffff81d21910>] numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [ 0.000000] [<ffffffff81d21e5d>] numa_init+0x1a5/0x520
> > >> [ 0.000000] [<ffffffff81d222b1>] x86_numa_init+0x19/0x3d
> > >> [ 0.000000] [<ffffffff81d22460>] initmem_init+0x9/0xb
> > >> [ 0.000000] [<ffffffff81d0d00c>] setup_arch+0x94f/0xc82
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff817bd0bb>] ? printk+0x55/0x6b
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff81d05d9b>] start_kernel+0xe8/0x4d6
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff81d055ee>] x86_64_start_reservations+0x2a/0x2c
> > >> [ 0.000000] [<ffffffff81d05751>] x86_64_start_kernel+0x161/0x184
> > >> [ 0.000000] ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff81d21910
> > >> [ 0.000000]
> > >> PANIC: early exception 0d rip 10:ffffffff8105d2a6 error 7eb cr2 ffff8800371dd000
> > >> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.0.0-rc6+ #44
> > >> [ 0.000000] Hardware name: LENOVO 4236NUC/4236NUC, BIOS 83ET76WW (1.46 ) 07/05/2013
> > >> [ 0.000000] 0000000000000000 c70296ddd809e4f6 ffffffff81b67c60 ffffffff817c2a26
> > >> [ 0.000000] 0000000000000096 ffffffff81a61c90 ffffffff81b67d68 fffffff000000084 0000000000000a0d 0000000000000a00
> > >> [ 0.000000] Call Trace:
> > >> [ 0.000000] [<ffffffff817c2a26>] dump_stack+0x45/0x57
> > >> [ 0.000000] [<ffffffff81d051b0>] early_idt_handler+0x90/0xb7
> > >> [ 0.000000] [<ffffffff8105d2a6>] ? native_irq_enable+0x6/0x10
> > >> [ 0.000000] [<ffffffff817bc9c5>] ? panic+0x1c3/0x204
> > >> [ 0.000000] [<ffffffff81d21910>] ? numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [ 0.000000] [<ffffffff8107741b>] __stack_chk_fail+0x1b/0x20
> > >> [ 0.000000] [<ffffffff81d21910>] numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [ 0.000000] [<ffffffff81d21e5d>] numa_init+0x1a5/0x520
> > >> [ 0.000000] [<ffffffff81d222b1>] x86_numa_init+0x19/0x3d
> > >> [ 0.000000] [<ffffffff81d22460>] initmem_init+0x9/0xb
> > >> [ 0.000000] [<ffffffff81d0d00c>] setup_arch+0x94f/0xc82
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff817bd0bb>] ? printk+0x55/0x6b
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff81d05d9b>] start_kernel+0xe8/0x4d6
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff81d05120>] ? early_idt_handlers+0x120/0x120
> > >> [ 0.000000] [<ffffffff81d055ee>] x86_64_start_reservations+0x2a/0x2c
> > >> [ 0.000000] [<ffffffff81d05751>] x86_64_start_kernel+0x161/0x184
> > >> [ 0.000000] RIP 0x46
> > >>
> > >> This is caused by writing past the end of the NUMA nodemask bitmap.
> > >>
> > >> numa_clear_kernel_node_hotplug() tries to set node ids in a nodemask bitmap. It
> > >> iterates over all reserved memblock regions and assumes every region has a valid
> > >> nid. That is not true: there is an exception for the graphics memory quirk, see
> > >> trim_snb_memory() in arch/x86/kernel/setup.c.
> > >>
> > >> The bug is easy to reproduce in a kdump kernel because the kdump kernel uses only
> > >> the pre-reserved crash memory instead of the whole memory, but kexec passes the
> > >> other reserved memory ranges to the 2nd kernel as well. For example, in my test:
> > >> kdump kernel ram 0x2d000000 - 0x37bfffff
> > >> One of the reserved regions: 0x40000000 - 0x40100000
> > >>
> > >> The above reserved region includes 0x40004000, a page excluded by
> > >> trim_snb_memory(). The nid of that memblock reserved region is never set, so it
> > >> keeps its default value, MAX_NUMNODES. The later node_set() call then sets bit
> > >> MAX_NUMNODES in the nodemask bitmap, one bit past its end, and the stack
> > >> corruption happens.
> > >>
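To make that overflow concrete, here is a minimal userspace sketch (illustrative
only: the one-long mask, MAX_NUMNODES == 64 and the unchecked node_set() are
simplifying assumptions for the sketch, not the kernel's exact definitions):

    #include <stdio.h>

    #define MAX_NUMNODES  64
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    struct frame {
            /* MAX_NUMNODES bits -> exactly one unsigned long on LP64 */
            unsigned long nodemask[MAX_NUMNODES / BITS_PER_LONG];
            unsigned long canary;  /* stands in for adjacent stack data */
    };

    /* models the kernel's node_set(): no bounds check on nid */
    static void node_set(int nid, unsigned long *mask)
    {
            mask[nid / BITS_PER_LONG] |= 1UL << (nid % BITS_PER_LONG);
    }

    int main(void)
    {
            struct frame f = { { 0 }, 0 };

            /* nid == MAX_NUMNODES sets bit 64, one past the mask... */
            node_set(MAX_NUMNODES, f.nodemask);
            /* ...and the write lands in the neighbouring variable */
            printf("canary: %#lx\n", f.canary);  /* prints 0x1, not 0 */
            return 0;
    }
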
> >
> > Hi Dave,
> >
> > Do you mean: the region 0x40000000 - 0x40100000 is reserved first, then the kdump
> > kernel boots, so this region is not included in "numa_meminfo", and the
> > memblock.reserved entry (0x40004000) from trim_snb_memory() still has nid ==
> > MAX_NUMNODES?
>
> Right. BTW, I booted the kdump kernel with numa=off to save memory.
>
> I suspect it can also be reproduced with mem=XYZ on a normal kernel.
Does the issue occur on your system with mem=0x40000000?

I think the issue occurs when a reserved memory range is not included in the
system RAM reported by the e820 map or the SRAT table. On your system,
0x40004000 is reserved by trim_snb_memory(). If you use mem=0x40000000, system
RAM is limited to below 0x40000000, so that page falls outside it and the issue
should occur.
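One way to check (just a debugging suggestion, not something I have run here):
booting with memblock=debug makes the early memblock_reserve() calls show up in
the boot log, so you can see whether the quirk page is still reserved when
system RAM is capped:

    # kernel command line
    mem=0x40000000 memblock=debug
    # after boot, look for the trim_snb_memory() reservation
    dmesg | grep -i 'memblock_reserve.*4000'
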
Thanks,
Yasuaki Ishimatsu
>
> >
> > numa_clear_kernel_node_hotplug
> > {
> > ...
> > for (i = 0; i < numa_meminfo.nr_blks; i++) {
> > struct numa_memblk *mb = &numa_meminfo.blk[i];
> >
> > memblock_set_node(mb->start, mb->end - mb->start,
> > &memblock.reserved, mb->nid); // this will not reset 0x40004000's node, right?
> > }
> > ...
> > }
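To spell out why that loop cannot reach the quirk page: memblock_set_node()
only stamps a nid on the regions that intersect the given [start, start + size)
range, so a reserved region outside every numa_meminfo block is never visited.
A simplified, self-contained model of that semantics (hypothetical types; the
real implementation also splits regions at the range boundaries first):

    #include <stdio.h>

    #define MAX_NUMNODES 64  /* regions start out with nid == MAX_NUMNODES */

    struct region { unsigned long base, size; int nid; };

    /* stamp `nid` on every region overlapping [start, start + size) */
    static void set_node(struct region *r, int n, unsigned long start,
                         unsigned long size, int nid)
    {
            for (int i = 0; i < n; i++) {
                    if (r[i].base + r[i].size <= start ||
                        r[i].base >= start + size)
                            continue;  /* no overlap: nid stays untouched */
                    r[i].nid = nid;
            }
    }

    int main(void)
    {
            struct region reserved[] = {
                    { 0x2d000000, 0x0ac00000, MAX_NUMNODES }, /* kdump ram */
                    { 0x40004000, 0x1000,     MAX_NUMNODES }, /* SNB quirk page */
            };

            /* numa_meminfo covers only the kdump ram: 0x2d000000-0x37bfffff */
            set_node(reserved, 2, 0x2d000000, 0x0ac00000, 0);

            /* prints "kdump ram nid=0, quirk page nid=64" */
            printf("kdump ram nid=%d, quirk page nid=%d\n",
                   reserved[0].nid, reserved[1].nid);
            return 0;
    }
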
> >
> > Thanks
> > Xishi Qiu
> >
> > >> Fix this by adding a check: do not call node_set() when nid is MAX_NUMNODES.
> > >>
> > >> Signed-off-by: Dave Young <dyoung@...hat.com>
> > >> ---
> > >> arch/x86/mm/numa.c | 3 ++-
> > >> 1 file changed, 2 insertions(+), 1 deletion(-)
> > >>
> > >> --- linux.orig/arch/x86/mm/numa.c
> > >> +++ linux/arch/x86/mm/numa.c
> > >> @@ -484,7 +484,8 @@ static void __init numa_clear_kernel_nod
> > >>
> > >> /* Mark all kernel nodes. */
> > >> for_each_memblock(reserved, r)
> > >> - node_set(r->nid, numa_kernel_nodes);
> > >> + if (r->nid != MAX_NUMNODES)
> > >> + node_set(r->nid, numa_kernel_nodes);
> > >>
> > >> /* Clear MEMBLOCK_HOTPLUG flag for memory in kernel nodes. */
> > >> for (i = 0; i < numa_meminfo.nr_blks; i++) {
> > >>