[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <200704011246.52238.ak@suse.de>
Date: Sun, 1 Apr 2007 12:46:51 +0200
From: Andi Kleen <ak@...e.de>
To: Christoph Lameter <clameter@....com>
Cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
Martin Bligh <mbligh@...gle.com>, linux-mm@...ck.org,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH 1/4] x86_64: Switch to SPARSE_VIRTUAL
On Sunday 01 April 2007 09:10, Christoph Lameter wrote:
> x86_64 make SPARSE_VIRTUAL the default
>
> x86_64 is using 2M page table entries to map its 1-1 kernel space.
> We implement the virtual memmap also using 2M page table entries.
> So there is no difference at all to FLATMEM. Both schemes require
> a page table and a TLB.
Hmm, this means there is at least 2MB worth of struct page on every node?
Or do you have overlaps with other memory (I think you have)
In that case you have to handle the overlap in change_page_attr()
Also your "generic" vmemmap code doesn't look very generic, but
rather x86 specific. I didn't think huge pages could be easily
set up this way in many other architectures.
And when you reserve virtual space somewhere you should
update Documentation/x86_64/mm.txt. Also you didn't adjust
the end of the vmalloc area so in theory vmalloc could run
into your vmemmap.
> Thus the SPARSEMEM becomes the most efficient way of handling
> virt_to_page, pfn_to_page and friends for UP, SMP and NUMA.
Do you have any benchmarks numbers to prove it? There seem to be a few
benchmarks where the discontig virt_to_page is a problem
(although I know ways to make it more efficient), and sparsemem
is normally slower. Still some numbers would be good.
-Andi
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists