Date: Mon, 5 May 2008 11:04:43 -0500
From: Robin Holt <holt@....com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...urebad.de>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
Yinghai Lu <yhlu.kernel@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Yasunori Goto <y-goto@...fujitsu.com>
Subject: Re: [rfc][patch 0/3] bootmem2: a memory block-oriented boot time
allocator
On Mon, May 05, 2008 at 08:23:34AM -0700, Linus Torvalds wrote:
>
>
> On Mon, 5 May 2008, Johannes Weiner wrote:
> >
> > here is a bootmem allocator replacement that uses one bitmap for all
> > available pages and works with a model of contiguous memory blocks
> > that reside on nodes instead of nodes only as the current allocator
> > does.
>
> Won't this have problems with huge non-contiguous areas?
>
> Some setups have traditionally had node memory separated in physical space
> by the high bits of the memory address, and using a single bitmap for such
> things would potentially be basically impossible - even with a single bit
> per page, the "span" of possible pages is potentially just too high, even
> if the nodes themselves don't have tons of memory, because the memory is
> just very spread out - and allocating the initial bitmap may not work
> reliably.
>
> Now, admittedly I don't know if we even support that kind of thing or if
> people really do things that way any more, so maybe it's not an issue.

The SGI sn2 architecture does.  Each DIMM bank is allocated a 16GB range
of physical addresses, and there are up to four banks per node.  The node
number is encoded into the higher portion of the address, giving a gap
between nodes of 256GB.  With a potential of 1024 nodes, a single bitmap
covering that span would end up being very large.
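
Just to put rough numbers on that, here is a back-of-the-envelope sketch
(not kernel code; the 16KB page size is my assumption, since ia64 commonly
runs with 16KB pages, while the other figures are the ones above):

#include <stdio.h>

int main(void)
{
	unsigned long long node_stride = 256ULL << 30;	/* 256GB of address space per node */
	unsigned long long nodes       = 1024;		/* potential node count            */
	unsigned long long page_size   = 16 * 1024;	/* assumed 16KB pages              */

	unsigned long long span  = node_stride * nodes;	/* full physical span to cover */
	unsigned long long pages = span / page_size;	/* one bit per page frame      */
	unsigned long long bytes = pages / 8;		/* resulting bitmap size       */

	printf("span   = %llu TB\n", span >> 40);
	printf("bitmap = %llu MB\n", bytes >> 20);
	return 0;
}

That works out to a 2GB bitmap for a 256TB span (8GB with 4KB pages),
nearly all of it describing holes, which is exactly the "allocating the
initial bitmap may not work reliably" concern above.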

Additionally, on our upcoming UV systems there will potentially be a
hole between the bulk of memory and a small amount addressable at the
high end of the address range (slightly short of 16TB), with the
typical gap being on the order of 15TB.
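
Under the same assumptions, a flat bitmap covering that ~16TB span would
need roughly 16TB / 16KB pages / 8 bits per byte = 128MB (512MB with 4KB
pages), almost all of it accounting for the ~15TB hole.
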
Thanks,
Robin Holt