Date:   Tue, 11 Sep 2018 20:08:03 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     tglx@...utronix.de, hpa@...or.com, thgarnie@...gle.com,
        kirill.shutemov@...ux.intel.com, x86@...nel.org,
        linux-kernel@...r.kernel.org,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Kees Cook <keescook@...omium.org>
Subject: Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of
 vmemmap region

On 09/11/18 at 11:28am, Ingo Molnar wrote:
> Yeah, so proper context is still missing, this paragraph appears to assume from the reader a 
> whole lot of prior knowledge, and this is one of the top comments in kaslr.c so there's nowhere 
> else to go read about the background.
> 
> For example what is the range of randomization of each region? Assuming the static, 
> non-randomized description in Documentation/x86/x86_64/mm.txt is correct, in what way does 
> KASLR modify that layout?
> 
> All of this is very opaque and not explained very well anywhere that I could find. We need to 
> generate a proper description ASAP.

OK, let me try to give some context based on my understanding, and copy
the static layout of the memory regions below for reference.

Here, Documentation/x86/x86_64/mm.txt is correct, and it is the
guideline we follow when arranging the layout of the kernel memory regions.
Originally the starting address of each region is aligned to 512 GB
so that each region starts at a PGD entry boundary in 4-level
paging. Since we have as much as 120 TB of virtual address space to spend,
they are in fact aligned at 1 TB. So the randomness mainly comes from
three parts:

1) The direct mapping region for physical memory. 64 TB are reserved to
cover the maximum supported physical memory. However, most systems have
much less RAM than 64 TB, often even less than 1 TB. The superfluous
space can join the randomization. This is usually the biggest part.

2) The holes between memory regions, even though each of them is only 1 TB.

3) The KASAN region takes up 16 TB, but KASAN does not take effect when
KASLR is enabled, so this space is available too. This is another big part.

With this superfluous address space, and with the starting address of each
memory region changed to PUD-level alignment, namely 1 GB, we can have
thousands of candidate positions at which to place those three memory regions.
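
To make that concrete, here is a rough user-space sketch of the idea, not
the actual code in arch/x86/mm/kaslr.c; the region sizes and the 120 TB
window below are assumed values for illustration only. The surplus space is
split across the three regions and each starting address is shifted by a
random, 1 GB (PUD) aligned offset:

/*
 * Rough, hypothetical user-space sketch of the idea above; not the
 * kernel's kaslr.c. Region sizes and the 120 TB window are assumed
 * values for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TB        (1UL << 40)
#define GB        (1UL << 30)
#define PUD_SIZE  GB                 /* 1 GB granularity (4-level paging) */

int main(void)
{
	/* Assumed actual sizes: 1 TB of RAM, 32 TB vmalloc, 1 TB vmemmap. */
	unsigned long size[3] = { 1 * TB, 32 * TB, 1 * TB };
	const char   *name[3] = { "direct mapping", "vmalloc", "vmemmap" };
	unsigned long vaddr   = 0xffff880000000000UL;  /* start of the window */
	unsigned long remain  = 120 * TB;              /* whole window */
	int i;

	for (i = 0; i < 3; i++)
		remain -= size[i];       /* what is left over is the entropy pool */

	srandom(time(NULL));
	for (i = 0; i < 3; i++) {
		/* Give this region an equal share of the remaining entropy. */
		unsigned long share  = remain / (3 - i);
		/* Pick a random, PUD (1 GB) aligned offset inside that share. */
		unsigned long offset = (random() % (share / PUD_SIZE + 1)) * PUD_SIZE;

		vaddr  += offset;
		printf("%-14s starts at %#lx (shifted by %lu GB)\n",
		       name[i], vaddr, offset / GB);

		vaddr  += size[i];       /* skip over the region itself */
		remain -= offset;        /* unused share carries over */
	}
	return 0;
}

Running it prints three 1 GB aligned starting addresses in the documented
order; the real kernel code additionally rounds up and pads between regions.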

The above is for 4-level paging mode. For 5-level paging, since the virtual
address space is much bigger, Kirill makes the starting addresses of the
regions P4D aligned, namely 512 GB.
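
Just to spell out the two alignment units: PUD_SHIFT is 30, so one PUD entry
covers 2^30 bytes = 1 GB, while P4D_SHIFT is 39, so one P4D entry covers
2^39 bytes = 512 GB. A tiny stand-alone snippet (plain C, not kernel code,
arbitrary example address) that rounds an address down to each boundary:

/*
 * Tiny stand-alone snippet (plain C, not kernel code) showing the two
 * alignment units mentioned above. The address is an arbitrary example.
 */
#include <stdio.h>

#define PUD_SHIFT  30                    /* 2^30 = 1 GB per PUD entry   */
#define P4D_SHIFT  39                    /* 2^39 = 512 GB per P4D entry */
#define ALIGN_DOWN(x, shift)  ((x) & ~((1UL << (shift)) - 1))

int main(void)
{
	unsigned long addr = 0xffffc90012345678UL;

	printf("PUD (1 GB)   aligned: %#lx\n", ALIGN_DOWN(addr, PUD_SHIFT));
	printf("P4D (512 GB) aligned: %#lx\n", ALIGN_DOWN(addr, P4D_SHIFT));
	return 0;
}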


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
    136 TB - 200 TB = 64 TB
ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
    200 TB - 201 TB = 1 TB
ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
    201 TB - 233 TB = 32 TB
ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
    233 TB - 234 TB = 1 TB
ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
    234 TB - 235 TB = 1 TB
... unused hole ...
ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
    236 TB - 252 TB = 16 TB
... unused hole ...

Thanks
Baoquan
