Message-ID: <24bebd32-2056-dd5a-8b77-d2a9572dc512@infradead.org>
Date: Thu, 8 Dec 2016 10:56:04 -0800
From: Randy Dunlap <rdunlap@...radead.org>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Arnd Bergmann <arnd@...db.de>,
"H. Peter Anvin" <hpa@...or.com>
Cc: Andi Kleen <ak@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Andy Lutomirski <luto@...capital.net>,
linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC, PATCHv1 17/28] x86/mm: define virtual memory map for
5-level paging
On 12/08/16 08:21, Kirill A. Shutemov wrote:
> The first part of memory map (up to %esp fixup) simply scales existing
> map for 4-level paging by factor of 9 -- number of bits addressed by
> additional page table level.
>
> The rest of the map is uncahnged.
unchanged.
(more fixes below)
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> ---
> Documentation/x86/x86_64/mm.txt | 23 ++++++++++++++++++++++-
> arch/x86/Kconfig | 1 +
> arch/x86/include/asm/kasan.h | 9 ++++++---
> arch/x86/include/asm/page_64_types.h | 10 ++++++++++
> arch/x86/include/asm/pgtable_64_types.h | 6 ++++++
> arch/x86/include/asm/sparsemem.h | 9 +++++++--
> 6 files changed, 52 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
> index 8c7dd5957ae1..d33fb0799b3d 100644
> --- a/Documentation/x86/x86_64/mm.txt
> +++ b/Documentation/x86/x86_64/mm.txt
> @@ -12,7 +12,7 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
> ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
> ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
> ... unused hole ...
> -ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB)
> +ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
> ... unused hole ...
> ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
> ... unused hole ...
> @@ -23,6 +23,27 @@ ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space
> ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
> ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
>
> +Virtual memory map with 5 level page tables:
> +
> +0000000000000000 - 00ffffffffffffff (=56 bits) user space, different per mm
> +hole caused by [57:63] sign extension
Can you briefly explain the sign extension?
Should that be [56:63]?
> +ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
> +ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
> +ff90000000000000 - ff91ffffffffffff (=49 bits) hole
> +ff92000000000000 - ffd1ffffffffffff (=54 bits) vmalloc/ioremap space
> +ffd2000000000000 - ff93ffffffffffff (=49 bits) virtual memory map (512TB)
> +... unused hole ...
> +ff96000000000000 - ffb5ffffffffffff (=53 bits) kasan shadow memory (8PB)
> +... unused hole ...
> +fffe000000000000 - fffeffffffffffff (=49 bits) %esp fixup stacks
> +... unused hole ...
> +ffffffef00000000 - ffffffff00000000 (=64 GB) EFI region mapping space
end address should be fffffffeffffffff
> +... unused hole ...
> +ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0
end address should be ffffffff9fffffff
> +ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space
> +ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
> +ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
> +
> The direct mapping covers all memory in the system up to the highest
> memory address (this means in some cases it can also include PCI memory
> holes).
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index 1410b567ecde..2587c6bd89be 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -11,9 +11,12 @@
> * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
> */
> #define KASAN_SHADOW_START (KASAN_SHADOW_OFFSET + \
> - (0xffff800000000000ULL >> 3))
> -/* 47 bits for kernel address -> (47 - 3) bits for shadow */
> -#define KASAN_SHADOW_END (KASAN_SHADOW_START + (1ULL << (47 - 3)))
> + ((-1UL << __VIRTUAL_MASK_SHIFT) >> 3))
> +/*
> + * 47 bits for kernel address -> (47 - 3) bits for shadow
> + * 56 bits for kernel address -> (56 - 3) bits fro shadow
typo: s/fro/for/
> + */
> +#define KASAN_SHADOW_END (KASAN_SHADOW_START + (1ULL << (__VIRTUAL_MASK_SHIFT - 3)))
>
> #ifndef __ASSEMBLY__
>
--
~Randy