Message-ID: <20191112204707.jyruwkb4pbdj3jvv@gabell>
Date:   Tue, 12 Nov 2019 15:47:08 -0500
From:   Masayoshi Mizuma <msys.mizuma@...il.com>
To:     Baoquan He <bhe@...hat.com>
Cc:     Borislav Petkov <bp@...en8.de>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
        Masayoshi Mizuma <m.mizuma@...fujitsu.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 4/4] x86/mm/KASLR: Adjust the padding size for the direct mapping.

On Mon, Nov 04, 2019 at 08:48:25AM +0800, Baoquan He wrote:
> On 11/01/19 at 09:09pm, Masayoshi Mizuma wrote:
> > ---
> >  arch/x86/mm/kaslr.c | 65 ++++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 50 insertions(+), 15 deletions(-)
> > 
> > diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> > index dc6182eec..a80eed563 100644
> > --- a/arch/x86/mm/kaslr.c
> > +++ b/arch/x86/mm/kaslr.c
> > @@ -70,15 +70,60 @@ static inline bool kaslr_memory_enabled(void)
> >  	return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
> >  }
> >  
> > +/*
> > + * Even though a huge virtual address space is reserved for the direct
> > + * mapping of physical memory (e.g. 64TB in 4-level paging mode), few
> > + * systems have enough physical memory to use it all; most have less
> > + * than 1TB. So with KASLR enabled, adapt the size of the direct
> > + * mapping area to the size of the actual physical memory plus the
> > + * configured padding CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING.
> > + * The remaining part is handed over to memory randomization.
> > + */
> > +static inline unsigned long calc_direct_mapping_size(void)
> > +{
> > +	unsigned long padding = CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;
> > +	unsigned long size_tb, memory_tb;
> > +#ifdef CONFIG_MEMORY_HOTPLUG
> > +	unsigned long actual, maximum, base;
> > +
> > +	if (boot_params.max_addr) {
> > +		/*
> > +		 * The padding size should be set so that kaslr_regions[].base
> > +		 * gets a bigger address than the maximum memory address the
> > +		 * system can have. kaslr_regions[].base points at "actual size
> > +		 * + padding" or a higher address. If "actual size + padding"
> > +		 * is below the maximum memory address, fix up the padding size.
> > +		 */
> > +		actual = roundup(PFN_PHYS(max_pfn), 1UL << TB_SHIFT);
> > +		maximum = roundup(boot_params.max_addr, 1UL << TB_SHIFT);
> > +		base = actual + (padding << TB_SHIFT);
> > +
> > +		if (maximum > base)
> > +			padding = (maximum - actual) >> TB_SHIFT;
> > +	}
> > +#endif
> > +	memory_tb =  DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
> > +			padding;
> 
> Yes, wrapping the whole adjustment code block for the direct mapping
> area into a function looks much better. This was also suggested by Ingo
> when I posted a fix for the UV system issue before, although that issue
> is no longer seen in the current code.
> 
> However, I have a small concern about the memory_tb calculation here.
> We can treat (actual RAM + CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING)
> as the default memory_tb, then check whether it needs adjusting
> according to boot_params.max_addr. Wouldn't discarding the local
> padding variable make the code much simpler? It is also a little
> confusing to mix it with the later padding concept used during
> randomization, I mean the get_padding() thing.
> 
> 
> 	memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
>                 CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;
> 
> #ifdef CONFIG_MEMORY_HOTPLUG
> 	if (boot_params.max_addr) {
> 		maximum = roundup(boot_params.max_addr,
> 				1UL << TB_SHIFT) >> TB_SHIFT;
> 
> 		if (maximum > memory_tb)
> 			memory_tb = maximum;
> 	}
> #endif
> 
> Personal opinion. Anyway, this patch looks good to me. Thanks.

Your suggestion makes it simpler, thanks!
So I'll modify calc_direct_mapping_size() as follows.
Does it make sense?

static inline unsigned long calc_direct_mapping_size(void)
{
       unsigned long size_tb, memory_tb;

       memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
               CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;

#ifdef CONFIG_MEMORY_HOTPLUG
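       /* Make sure the region covers the maximum address the system can have */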
       if (boot_params.max_addr) {
               unsigned long maximum_tb;

               maximum_tb = DIV_ROUND_UP(boot_params.max_addr,
                               1UL << TB_SHIFT);

               if (maximum_tb > memory_tb)
                       memory_tb = maximum_tb;
       }
#endif
       size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);

       /*
        * Adapt physical memory region size based on available memory
        */
       if (memory_tb < size_tb)
               size_tb = memory_tb;

       return size_tb;
}
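
For reference, here is a minimal userspace sketch of the same arithmetic,
using made-up values for max_pfn and boot_params.max_addr (both purely
hypothetical, with 10 TB assumed as the padding), just to illustrate the
rounding and the clamp against MAX_PHYSMEM_BITS:

#include <stdio.h>

#define PAGE_SHIFT		12
#define TB_SHIFT		40
#define MAX_PHYSMEM_BITS	46	/* 4-level paging: 64 TB */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define PHYSICAL_PADDING	10	/* stand-in for CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING */

int main(void)
{
	unsigned long max_pfn  = 0x18000000UL;		/* 1.5 TB of RAM */
	unsigned long max_addr = 20UL << TB_SHIFT;	/* hypothetical hotplug limit: 20 TB */
	unsigned long memory_tb, maximum_tb, size_tb;

	/* default: actual RAM rounded up to whole TBs, plus the padding */
	memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
			PHYSICAL_PADDING;			/* 2 + 10 = 12 */

	/* expand to the maximum address the system can have, if larger */
	maximum_tb = DIV_ROUND_UP(max_addr, 1UL << TB_SHIFT);	/* 20 */
	if (maximum_tb > memory_tb)
		memory_tb = maximum_tb;

	/* never exceed the virtual space reserved for the direct mapping */
	size_tb = 1UL << (MAX_PHYSMEM_BITS - TB_SHIFT);		/* 64 */
	if (memory_tb < size_tb)
		size_tb = memory_tb;

	printf("direct mapping size: %lu TB\n", size_tb);	/* prints 20 */
	return 0;
}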

Thanks,
Masa

> 
> Thanks
> Baoquan
> 
> 
> > +
> > +	size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
> > +
> > +	/*
> > +	 * Adapt physical memory region size based on available memory
> > +	 */
> > +	if (memory_tb < size_tb)
> > +		size_tb = memory_tb;
> > +
> > +	return size_tb;
> > +}
> > +
> >  /* Initialize base and padding for each memory region randomized with KASLR */
> >  void __init kernel_randomize_memory(void)
> >  {
> > -	size_t i;
> > -	unsigned long vaddr_start, vaddr;
> > -	unsigned long rand, memory_tb;
> > -	struct rnd_state rand_state;
> > +	unsigned long vaddr_start, vaddr, rand;
> >  	unsigned long remain_entropy;
> >  	unsigned long vmemmap_size;
> > +	struct rnd_state rand_state;
> > +	size_t i;
> >  
> >  	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
> >  	vaddr = vaddr_start;
> > @@ -95,20 +140,10 @@ void __init kernel_randomize_memory(void)
> >  	if (!kaslr_memory_enabled())
> >  		return;
> >  
> > -	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
> > +	kaslr_regions[0].size_tb = calc_direct_mapping_size();
> >  	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
> >  
> > -	/*
> > -	 * Update Physical memory mapping to available and
> > -	 * add padding if needed (especially for memory hotplug support).
> > -	 */
> >  	BUG_ON(kaslr_regions[0].base != &page_offset_base);
> > -	memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
> > -		CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;
> > -
> > -	/* Adapt phyiscal memory region size based on available memory */
> > -	if (memory_tb < kaslr_regions[0].size_tb)
> > -		kaslr_regions[0].size_tb = memory_tb;
> >  
> >  	/*
> >  	 * Calculate the vmemmap region size in TBs, aligned to a TB
> > -- 
> > 2.20.1
> > 
> 
