Open Source and information security mailing list archives
 
Date:   Thu, 8 Dec 2016 16:40:00 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Alexander Kuleshov <kuleshovmail@...il.com>
Cc:     linux-kernel@...r.kernel.org, tglx@...utronix.de, hpa@...or.com,
        mingo@...hat.com, x86@...nel.org, keescook@...omium.org,
        yinghai@...nel.org, bp@...e.de, thgarnie@...gle.com,
        luto@...nel.org, anderson@...hat.com, dyoung@...hat.com,
        xlpang@...hat.com
Subject: Re: [PATCH 1/2] x86/64: Make kernel text mapping always take one
 whole page table in early boot code

On 12/08/16 at 02:24pm, Alexander Kuleshov wrote:
> On 12-08-16, Baoquan He wrote:
> > In early boot code level2_kernel_pgt is used to map kernel text. Its
> > size varies according to KERNEL_IMAGE_SIZE and is fixed at compile time.
> > In fact we can make it always take 512 entries of one whople page table,
> > because the later function cleanup_highmap will clean up the unused
> > entries. With this change the kernel text mapping size can be decided at
> > runtime: 512M if kaslr is disabled, 1G if kaslr is enabled.
> 
> s/whople/whole

Will change. Thanks!

> 
> > Signed-off-by: Baoquan He <bhe@...hat.com>
> > ---
> >  arch/x86/include/asm/page_64_types.h |  3 ++-
> >  arch/x86/kernel/head_64.S            | 15 ++++++++-------
> >  arch/x86/mm/init_64.c                |  2 +-
> >  3 files changed, 11 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
> > index 9215e05..62a20ea 100644
> > --- a/arch/x86/include/asm/page_64_types.h
> > +++ b/arch/x86/include/asm/page_64_types.h
> > @@ -56,8 +56,9 @@
> >   * are fully set up. If kernel ASLR is configured, it can extend the
> >   * kernel page table mapping, reducing the size of the modules area.
> >   */
> > +#define KERNEL_MAPPING_SIZE_EXT	(1024 * 1024 * 1024)
> >  #if defined(CONFIG_RANDOMIZE_BASE)
> > -#define KERNEL_IMAGE_SIZE	(1024 * 1024 * 1024)
> > +#define KERNEL_IMAGE_SIZE	KERNEL_MAPPING_SIZE_EXT
> >  #else
> >  #define KERNEL_IMAGE_SIZE	(512 * 1024 * 1024)
> >  #endif
> > diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> > index b4421cc..c4b40e7c9 100644
> > --- a/arch/x86/kernel/head_64.S
> > +++ b/arch/x86/kernel/head_64.S
> > @@ -453,17 +453,18 @@ NEXT_PAGE(level3_kernel_pgt)
> >  
> >  NEXT_PAGE(level2_kernel_pgt)
> >  	/*
> > -	 * 512 MB kernel mapping. We spend a full page on this pagetable
> > -	 * anyway.
> > +	 * Kernel image size is limited to 512 MB. The kernel code+data+bss
> > +	 * must not be bigger than that.
> >  	 *
> > -	 * The kernel code+data+bss must not be bigger than that.
> > +	 * We spend a full page on this pagetable anyway, so take the whole
> > +	 * page here so that the kernel mapping size can be decided at runtime,
> > +	 * 512M if no kaslr, 1G if kaslr enabled. Later cleanup_highmap will
> > +	 * clean up those unused entries.
> >  	 *
> > -	 * (NOTE: at +512MB starts the module area, see MODULES_VADDR.
> > -	 *  If you want to increase this then increase MODULES_VADDR
> > -	 *  too.)
> > +	 * The module area starts after kernel mapping area.
> >  	 */
> >  	PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
> > -		KERNEL_IMAGE_SIZE/PMD_SIZE)
> > +		PTRS_PER_PMD)
> >  
> >  NEXT_PAGE(level2_fixmap_pgt)
> >  	.fill	506,8,0
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 14b9dd7..e95b977 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -307,7 +307,7 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
> >  void __init cleanup_highmap(void)
> >  {
> >  	unsigned long vaddr = __START_KERNEL_map;
> > -	unsigned long vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE;
> > +	unsigned long vaddr_end = __START_KERNEL_map + KERNEL_MAPPING_SIZE_EXT;
> >  	unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
> >  	pmd_t *pmd = level2_kernel_pgt;
> >  
> > -- 
> > 2.5.5
> > 

