Message-ID: <m11wi8qflo.fsf@ebiederm.dsl.xmission.com>
Date: Wed, 25 Apr 2007 16:00:35 -0600
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Chuck Ebbert <cebbert@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Andi Kleen <ak@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
virtualization@...ts.osdl.org, lkml <linux-kernel@...r.kernel.org>,
Zachary Amsden <zach@...are.com>,
Chris Wright <chrisw@...s-sol.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 10/28] i386: map enough initial memory to create lowmem mappings
Jeremy Fitzhardinge <jeremy@...p.org> writes:
> Eric W. Biederman wrote:
>> Jeremy did your kernel have PAE enabled?
>>
>> It just occurred to me that we have all of the memory below 1M (say about
>> 512K) mapped and available to set up new mappings.
>>
>> The only way I can see a page fault happening is if you were using a PAE
>> enabled kernel (so you were not updating the current page tables) and
>> you have more than 256M of low memory, and we don't get much extra
>> from always mapping 4M at a time.
>
> Yes, that's the situation. PAE enabled, ~768MB of memory and a large
> kernel which mostly fills the 8M mapping.
And it just occurred to me that PSE must have been disabled, otherwise you
would not have needed more than 4 pages.  I suppose you were testing the Xen case.
That said, I now know for certain why we have never seen this in
production: all cpus that fit this configuration also support PSE.
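
To make the difference concrete, a back-of-the-envelope calculation
(standard i386 paging geometry; the ~768MB figure is Jeremy's, the rest is
my arithmetic, not something the boot code actually computes this way):
with 4K pages each page-table page maps 4MB non-PAE but only 2MB under PAE,
while with PSE the lowmem mapping lives entirely in the page directory as
large pages and needs no page-table pages at all.  A throwaway user-space
program to print the numbers:

#include <stdio.h>

/* Bytes of lowmem mapped by one page-table page full of 4K PTEs. */
#define NONPAE_PER_PT	(1024UL * 4096)	/* 1024 4-byte PTEs -> 4MB */
#define PAE_PER_PT	(512UL * 4096)	/*  512 8-byte PTEs -> 2MB */

static unsigned long pt_pages(unsigned long lowmem, unsigned long per_pt)
{
	return (lowmem + per_pt - 1) / per_pt;
}

int main(void)
{
	unsigned long lowmem = 768UL << 20;	/* Jeremy's ~768MB of lowmem */

	/* With PSE the 4M/2M pages sit directly in the page directory,
	 * so the lowmem mapping needs no page-table pages. */
	printf("PSE large pages:    0 page-table pages\n");
	printf("non-PAE, 4K pages: %lu page-table pages\n",
	       pt_pages(lowmem, NONPAE_PER_PT));
	printf("PAE, 4K pages:     %lu page-table pages\n",
	       pt_pages(lowmem, PAE_PER_PT));
	return 0;
}

That works out to 192 page-table pages non-PAE and 384 under PAE for
768MB, versus the handful reserved for the initial 8M mapping.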
So the way I think we should properly fix this is:
- Enable PAE early so we are updating our current page tables.
- Pass a limit to the bootmem allocator so we do not get pages
  past the point where we have set up page tables.
- Handle memory allocation failures.
However, only the first part is necessary to solve the problem in
practice, even if you are using 4K pages.
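
For the second and third points, the shape of what I mean is roughly the
toy, user-space sketch below.  This is not the real bootmem interface and
all the names and numbers are made up; it just shows an allocator that
refuses to hand out pages past the end of the current mappings, with
callers that have to cope with that:

#include <stdio.h>

struct boot_alloc {
	unsigned long next;		/* next free physical address */
	unsigned long mapped_end;	/* page tables cover [0, mapped_end) */
};

/* Hand out one 4K page, but never beyond the mapped limit. */
static unsigned long boot_alloc_page(struct boot_alloc *a)
{
	if (a->next + 4096 > a->mapped_end)
		return 0;		/* caller must handle failure */
	unsigned long page = a->next;
	a->next += 4096;
	return page;
}

int main(void)
{
	/* Hypothetical numbers: 8M mapped at boot (as in the report above),
	 * allocations starting right after a 6M kernel image. */
	struct boot_alloc a = { .next = 6UL << 20, .mapped_end = 8UL << 20 };
	int n = 0;

	while (boot_alloc_page(&a))
		n++;
	printf("%d pages handed out before hitting the mapping limit\n", n);
	return 0;
}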
Eric