Message-Id: <20190314094645.4883-2-bhe@redhat.com>
Date: Thu, 14 Mar 2019 17:46:40 +0800
From: Baoquan He <bhe@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...nel.org, keescook@...omium.org, kirill@...temov.name,
yamada.masahiro@...ionext.com, tglx@...utronix.de, bp@...en8.de,
hpa@...or.com, dave.hansen@...ux.intel.com, luto@...nel.org,
peterz@...radead.org, x86@...nel.org, thgarnie@...gle.com,
Baoquan He <bhe@...hat.com>
Subject: [PATCH v4 1/6] x86/mm/KASLR: Improve code comments about struct kaslr_memory_region
The old comment above kaslr_memory_region does not explain the concept
of memory region KASLR clearly enough.
[Ingo suggested this and helped to prettify the text]
Signed-off-by: Baoquan He <bhe@...hat.com>
---
arch/x86/mm/kaslr.c | 51 +++++++++++++++++++++++++++++++++++++++++----
1 file changed, 47 insertions(+), 4 deletions(-)
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 9a8756517504..5debf82ab06a 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -42,9 +42,46 @@
static const unsigned long vaddr_end = CPU_ENTRY_AREA_BASE;
/*
- * Memory regions randomized by KASLR (except modules that use a separate logic
- * earlier during boot). The list is ordered based on virtual addresses. This
- * order is kept after randomization.
+ * struct kaslr_memory_region - represents contiguous chunks of kernel
+ * virtual memory to be randomized by KASLR.
+ *
+ * ( The exception is the module space virtual memory window which
+ * uses separate logic earlier during bootup. )
+ *
+ * Currently there are three such regions: the physical memory mapping,
+ * vmalloc and vmemmap regions.
+ *
+ * The array below has the entries ordered based on virtual addresses.
+ * The order is kept after randomization, i.e. the randomized virtual
+ * addresses of these regions are still ascending.
+ *
+ * Here are the fields:
+ *
+ * @base: points to a global variable used by the MM to get the virtual
+ * base address of any of the above regions. This allows the early
+ * KASLR code to modify these base addresses early during bootup, on a
+ * per-bootup basis, without the MM code even being aware of whether
+ * they got changed and to what value.
+ *
+ * When KASLR is active then the MM code makes sure that for each region
+ * there's such a single, dynamic, global base address 'unsigned long'
+ * variable available for the KASLR code to point to and modify directly:
+ *
+ * { &page_offset_base, 0 },
+ * { &vmalloc_base, 0 },
+ * { &vmemmap_base, 1 },
+ *
+ * @size_tb: size in TB of each memory region. E.g., the sizes in 4-level
+ * paging mode are:
+ *
+ * - Physical memory mapping: (actual RAM size + 10 TB padding)
+ * - Vmalloc: 32 TB
+ * - Vmemmap: 1 TB
+ *
+ * As seen, the size of the physical memory mapping region is variable,
+ * calculated according to the actual size of system RAM, in order to
+ * leave more space for randomization. The other sizes are fixed values
+ * determined by the paging mode.
*/
static __initdata struct kaslr_memory_region {
unsigned long *base;
@@ -70,7 +107,13 @@ static inline bool kaslr_memory_enabled(void)
return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
}
-/* Initialize base and padding for each memory region randomized with KASLR */
+/*
+ * kernel_randomize_memory - initialize base and padding for each
+ * memory region randomized with KASLR.
+ *
+ * When the layout is randomized, the order of the regions is kept: the
+ * physical memory mapping region is handled first, then vmalloc and vmemmap.
+ */
void __init kernel_randomize_memory(void)
{
size_t i;
--
2.17.2
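
To make the randomization flow described by the new comments concrete,
here is a minimal stand-alone user-space sketch of how such a regions
array could be walked. It only mirrors the shape of the logic: the
randomization window, the sizes, the rand()-based randomness and all
variable names are invented for illustration and are not the kernel
implementation in arch/x86/mm/kaslr.c, which uses the kernel's own RNG
and aligns the randomized offsets.

/* kaslr_regions_sketch.c - illustration only, not the kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TB_SHIFT        40UL
#define NR_REGIONS      3UL

/* Stand-ins for the kernel's page_offset_base, vmalloc_base, vmemmap_base. */
static unsigned long page_offset_base;
static unsigned long vmalloc_base;
static unsigned long vmemmap_base;

static struct {
        unsigned long *base;    /* global base variable the MM code reads */
        unsigned long size_tb;  /* region size in TB (example values) */
} kaslr_regions[NR_REGIONS] = {
        { &page_offset_base, 64 },      /* physical mapping: RAM + padding */
        { &vmalloc_base,     32 },      /* vmalloc */
        { &vmemmap_base,      1 },      /* vmemmap */
};

int main(void)
{
        /* Hypothetical randomization window; the kernel derives the real one. */
        unsigned long vaddr = 0xffff880000000000UL;
        unsigned long vaddr_end = 0xffffff0000000000UL;
        unsigned long remain_entropy = vaddr_end - vaddr;
        unsigned long i;

        srand((unsigned int)time(NULL));

        /* Subtract the regions' own sizes; what remains can be spread out. */
        for (i = 0; i < NR_REGIONS; i++)
                remain_entropy -= kaslr_regions[i].size_tb << TB_SHIFT;

        /* Walk the regions in array (= ascending virtual address) order. */
        for (i = 0; i < NR_REGIONS; i++) {
                unsigned long entropy = remain_entropy / (NR_REGIONS - i);
                /* rand() is only a placeholder for a proper 64-bit RNG. */
                unsigned long rnd = ((unsigned long)rand() << 31 |
                                     (unsigned long)rand()) % (entropy + 1);

                vaddr += rnd;
                *kaslr_regions[i].base = vaddr; /* publish the new base */

                /* The next region starts above this one: order is preserved. */
                vaddr += kaslr_regions[i].size_tb << TB_SHIFT;
                remain_entropy -= rnd;

                printf("region %lu: base = 0x%016lx\n", i, *kaslr_regions[i].base);
        }

        return 0;
}

Built stand-alone (e.g. gcc -o kaslr_sketch kaslr_regions_sketch.c), it
prints three ascending base addresses, which is exactly the ordering
invariant the comment above documents.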