Message-Id: <1175543696.22373.51.camel@localhost.localdomain>
Date: Mon, 02 Apr 2007 12:54:56 -0700
From: Dave Hansen <hansendc@...ibm.com>
To: Christoph Lameter <clameter@....com>
Cc: Andi Kleen <ak@...e.de>, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org, Martin Bligh <mbligh@...gle.com>,
linux-mm@...ck.org,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH 1/4] x86_64: Switch to SPARSE_VIRTUAL
On Mon, 2007-04-02 at 08:54 -0700, Christoph Lameter wrote:
> > BTW there is no guarantee the node size is a multiple of 128MB so
> > you likely need to handle the overlap case. Otherwise we can
> > get cache corruptions
>
> How does sparsemem handle that?
It doesn't. :)
In practice, this situation never happens: no actual architecture puts
a node boundary at anything finer than MAX_ORDER alignment, and the
section size is always at least MAX_ORDER. If we *did* have such a
boundary, the page allocator would already be broken for those
nodes. ;)
So, this SPARSE_VIRTUAL does introduce a new dependency, which Andi
calculated above. But, in reality, I don't think it's a big deal. Just
to spell it out a bit more, if this:
VMEMMAP_MAPPING_SIZE/sizeof(struct page) * PAGE_SIZE
(where VMEMMAP_MAPPING_SIZE is PMD_SIZE in your case) is any larger than
the granularity on which your NUMA nodes are divided, then you might
have a problem with mem_map for one NUMA node getting allocated on
another.
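To make that concrete, here is the arithmetic with typical x86_64
values. PMD_SIZE is 2MB, PAGE_SIZE is 4KB, and sizeof(struct page) is
taken as 64 bytes (that last one is config-dependent, so treat it as an
assumption):

	2MB / 64 * 4KB = 32768 * 4KB = 128MB

That is where the 128MB figure above comes from: each 2MB vmemmap PMD
covers the struct pages for 128MB of memory, so a node boundary that is
not 128MB-aligned means one PMD's worth of mem_map straddles two nodes.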
It might be worth a comment, or at least some kind of WARN_ON().
Perhaps we can stick something in online_page() to check if:
page_to_nid(page) == page_to_nid(virt_to_page(page))
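Something like this (just a sketch of the idea, untested; the exact
hook point in online_page() and the WARN_ON_ONCE flavour are my guess):

	/*
	 * Sketch: warn if the struct page for this page was allocated
	 * on a different node than the page itself.
	 */
	WARN_ON_ONCE(page_to_nid(page) != page_to_nid(virt_to_page(page)));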
-- Dave