Message-ID: <CAE9FiQWuC+34=q1K5oj8j+U1XjbXPDz5k1VKvs4vnoZDAzFzRg@mail.gmail.com>
Date: Wed, 19 Dec 2012 15:40:38 -0800
From: Yinghai Lu <yinghai@...nel.org>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Borislav Petkov <bp@...en8.de>, Jacob Shin <jacob.shin@....com>,
"H. Peter Anvin" <hpa@...ux.intel.com>,
"Yu, Fenghua" <fenghua.yu@...el.com>,
"mingo@...nel.org" <mingo@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"linux-tip-commits@...r.kernel.org"
<linux-tip-commits@...r.kernel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Stefano Stabellini <Stefano.Stabellini@...citrix.com>
Subject: Re: [tip:x86/microcode] x86/microcode_intel_early.c: Early update
ucode on Intel's CPU
On Wed, Dec 19, 2012 at 3:22 PM, H. Peter Anvin <hpa@...or.com> wrote:
> The other bit is that building the real kernel page tables iteratively
> (ignoring the early page tables here) is safer, since the real page
> table builder is fully aware of the memory map. This means any
> "spillover" from the early page tables gets minimized to regions where
> there are data objects that have to be accessed early. Since Yinghai
> already had iterative page table building working, I don't see any
> reason to not use that capability.
That is v6, right?
I'm including that patch below:
---
Subject: [PATCH] x86, 64bit: Set extra ident mapping for whole kernel range
Currently, when the kernel is loaded above 1G, only [_text, _text+2M] is
set up with an extra ident page table.
That is not enough: some variables that can be used early are outside
that range, for example the BRK area used for the early page tables.
We need to set up the mapping for [_text, _end], covering text/data/bss/brk...
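Not part of the patch, just to illustrate the arithmetic: a small
standalone userspace sketch (the _text/_end addresses below are made-up
examples) counting how many 2MB ident-mapping entries [_text, _end]
actually needs, versus the single 2MB window mapped today.

/*
 * Illustrative sketch only, not kernel code: count the 2MB (level2/PMD)
 * ident-mapping entries needed to cover [_text, _end].
 */
#include <stdio.h>
#include <stdint.h>

#define PMD_SHIFT 21                    /* one level2 (PMD) entry maps 2MB */
#define PMD_SIZE  (1ULL << PMD_SHIFT)
#define PMD_MASK  (~(PMD_SIZE - 1))

int main(void)
{
        uint64_t text = 0x140000000ULL;         /* example: kernel loaded at 5G */
        uint64_t end  = text + 0x1a00000ULL;    /* example: ~26MB text/data/bss/brk */

        uint64_t start   = text & PMD_MASK;                 /* round down to 2MB */
        uint64_t stop    = (end + PMD_SIZE - 1) & PMD_MASK; /* round up to 2MB */
        uint64_t entries = (stop - start) >> PMD_SHIFT;

        printf("[_text, _end] needs %llu 2MB entries; [_text, _text+2M] covers only 1\n",
               (unsigned long long)entries);
        return 0;
}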
Also, the kernel currently cannot be loaded above 512g, because that
address is treated as too big.
We need to add one extra spare level3 page to point at that 512g range:
check the _text range, set the level4 pgd entry with that spare level3
page, and fill the level3 page with level2 pages so the extra mapping
covers [_text, _end].
Finally, to handle crossing a GB boundary we need to add another spare
level2 page, and to handle crossing a 512GB boundary we need to add
another spare level3 page for the next 512G range.
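Again only to illustrate (addresses are made up): whether a second spare
level2 or level3 page gets consumed comes down to whether [_text, _end]
crosses a 1G or 512G boundary, i.e. whether _text and _end-1 land in the
same level3/level4 slot.

/*
 * Illustrative sketch only, not kernel code: check whether [_text, _end]
 * crosses a 1GB (level3/PUD slot) or 512GB (level4/PGD slot) boundary,
 * which decides whether one or two spare level2/level3 pages are needed.
 */
#include <stdio.h>
#include <stdint.h>

#define PUD_SHIFT   30          /* one level3 (PUD) entry covers 1GB */
#define PGDIR_SHIFT 39          /* one level4 (PGD) entry covers 512GB */

int main(void)
{
        uint64_t text = 0x13ff00000ULL;         /* example: just below 5G */
        uint64_t end  = text + 0x1a00000ULL;    /* example: ~26MB kernel image */

        int level2_pages = ((end - 1) >> PUD_SHIFT)   == (text >> PUD_SHIFT)   ? 1 : 2;
        int level3_pages = ((end - 1) >> PGDIR_SHIFT) == (text >> PGDIR_SHIFT) ? 1 : 2;

        printf("crosses 1G boundary:   %s -> %d spare level2 page(s)\n",
               level2_pages == 2 ? "yes" : "no", level2_pages);
        printf("crosses 512G boundary: %s -> %d spare level3 page(s)\n",
               level3_pages == 2 ? "yes" : "no", level3_pages);
        return 0;
}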
Tested with kexec-tools plus local test code to force loading the kernel
across 1G, 5G, 512g and 513g.
We need this to be able to put a relocatable 64-bit bzImage high above 1G.
-v4: add handling for crossing a GB boundary.
-v5: use spare pages from BRK, so pages are saved when the kernel is not
loaded above 1GB.
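For the -v5 BRK point, a rough userspace analogue (not the kernel's
actual RESERVE_BRK/extend_brk code, and the page count is an
assumption): reserve the spare pages up front in a brk-style area and
only bump the used counter when a boundary check like the one above says
an extra page is really needed, so nothing is wasted when the kernel
sits below 1G.

/*
 * Illustrative analogue only, not kernel code: a bump allocator over a
 * statically reserved area, handing out spare page-table pages on demand.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   4096
#define SPARE_PAGES 3                   /* assumed: 1 level3 + 2 level2 pages */

static uint8_t brk_area[SPARE_PAGES * PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
static size_t  brk_used;

/* Hand out one zeroed, page-aligned spare page from the reserved area. */
static void *alloc_spare_page(void)
{
        if (brk_used + PAGE_SIZE > sizeof(brk_area))
                return NULL;            /* reservation exhausted */
        void *page = brk_area + brk_used;
        brk_used += PAGE_SIZE;
        memset(page, 0, PAGE_SIZE);
        return page;
}

int main(void)
{
        int loaded_above_1g = 0;        /* example: low load address */

        if (loaded_above_1g)
                alloc_spare_page();     /* would become the extra level2 page */

        printf("spare pages used: %zu, left unused: %zu\n",
               brk_used / PAGE_SIZE,
               (size_t)SPARE_PAGES - brk_used / PAGE_SIZE);
        return 0;
}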
--