Message-ID: <52E1E174.9040107@ti.com>
Date: Thu, 23 Jan 2014 22:43:48 -0500
From: Santosh Shilimkar <santosh.shilimkar@...com>
To: Dave Hansen <dave.hansen@...el.com>
CC: "Strashko, Grygorii" <grygorii.strashko@...com>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Yinghai Lu <yinghai@...nel.org>, Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Panic on 8-node system in memblock_virt_alloc_try_nid()
Dave,
On Thursday 23 January 2014 05:49 PM, Dave Hansen wrote:
> Linus's current tree doesn't boot on an 8-node/1TB NUMA system that I
> have. Its reboots are *LONG*, so I haven't fully bisected it, but it's
> down to just a few commits, most of which are changes to the memblock
> code. Since the panic is in the memblock code, it looks like a
> no-brainer. It's almost certainly the code from Santosh or Grygorii
> that's triggering this.
>
> Config and good/bad dmesg with memblock=debug are here:
>
> http://sr71.net/~dave/intel/3.13/
>
> Please let me know if you need it bisected further than this.
>
Thanks a lot for the debug information; it's pretty useful. The oops
actually seems to be a side effect of the NUMA nodes not being set up
correctly in the first place. At least the setup_node_data() results
indicate that. setup_node_data() operates on the physical memblock
interfaces, which are untouched except for the alignment change, and
that is potentially the reason for the change in behavior.

Would you be able to revert the commit below and give it a quick try
to see whether the behavior changes? The revert might impact other
APIs, since they now assume the default alignment of SMP_CACHE_BYTES,
but at the least I want to see whether setup_node_data() reserves the
correct memory space with it reverted.
79f40fa mm/memblock: drop WARN and use SMP_CACHE_BYTES as a default alignment
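
Just to illustrate what I mean by the alignment change potentially
moving things around, here is a rough userspace toy (not kernel code;
the free-region bounds, the allocation size and the 8 vs. 64 byte
alignments below are made-up values, and the candidate search is only
loosely in the style of memblock's top-down round_down(end - size,
align) step):

/*
 * Userspace toy only, not kernel code: a minimal top-down candidate
 * search with alignment rounding, loosely in the style of memblock's
 * round_down(end - size, align) step.  All constants are made up for
 * illustration, just to show how a bigger default alignment can shift
 * where a reservation such as the setup_node_data() one lands.
 */
#include <stdio.h>

static unsigned long long round_down_to(unsigned long long x,
					unsigned long long align)
{
	return x & ~(align - 1);
}

static unsigned long long find_top_down(unsigned long long start,
					unsigned long long end,
					unsigned long long size,
					unsigned long long align)
{
	unsigned long long cand = round_down_to(end - size, align);

	return cand >= start ? cand : 0;	/* 0 == no fit */
}

int main(void)
{
	unsigned long long start = 0x100000000ULL;	/* arbitrary free region */
	unsigned long long end   = 0x10000d2f4ULL;
	unsigned long long size  = 0x3f40ULL;		/* arbitrary allocation size */

	printf("align  8: 0x%llx\n", find_top_down(start, end, size, 8));
	printf("align 64: 0x%llx\n", find_top_down(start, end, size, 64));
	return 0;
}

The different base with the bigger alignment is harmless in itself,
but it means the early reservations land in different places than
before, which is why I'd like to see the setup_node_data() output with
the commit reverted.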
Regards,
Santosh