Date:   Mon, 13 Nov 2017 17:26:24 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Chao Fan <fanc.fnst@...fujitsu.com>
Cc:     linux-kernel@...r.kernel.org, x86@...nel.org, hpa@...or.com,
        tglx@...utronix.de, mingo@...hat.com, keescook@...omium.org,
        yasu.isimatu@...il.com, indou.takao@...fujitsu.com,
        caoj.fnst@...fujitsu.com, douly.fnst@...fujitsu.com
Subject: Re: [PATCH v2 2/4] kaslr: select the memory region in immovable node
 to process

On 11/13/17 at 05:18pm, Chao Fan wrote:
> On Mon, Nov 13, 2017 at 04:31:31PM +0800, Baoquan He wrote:
> >On 11/01/17 at 07:32pm, Chao Fan wrote:
> >> Compare each memmap entry against the ranges in immovable_mem, and pass
> >> only the intersection to process_mem_region().
> >> 
> >> The relationship between e820/EFI entries and the memory regions in
> >> immovable_mem is not one-to-one: one node's memory region may span
> >> several e820 or EFI entries, and one e820 or EFI entry may cover memory
> >> belonging to several nodes. So one node or one entry may be split into
> >> several regions.
> >> 
> >> Signed-off-by: Chao Fan <fanc.fnst@...fujitsu.com>
> >> ---
> >>  arch/x86/boot/compressed/kaslr.c | 60 ++++++++++++++++++++++++++++++++++------
> >>  1 file changed, 52 insertions(+), 8 deletions(-)
> >> 
> >> diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
> >> index 0a591c0023f1..fcd640fdeaed 100644
> >> --- a/arch/x86/boot/compressed/kaslr.c
> >> +++ b/arch/x86/boot/compressed/kaslr.c
> >> @@ -634,6 +634,54 @@ static void process_mem_region(struct mem_vector *entry,
> >>  	}
> >>  }
> >>  
> >> +static bool select_immovable_node(struct mem_vector region,
> >> +				  unsigned long long minimum,
> >> +				  unsigned long long image_size)
> >> +{
> >
> >About this patch, I want to note two things:
> >1) In the current code, the 'movable_node' kernel parameter is exclusive.
> >In find_zone_movable_pfns_for_nodes(), you can see that 'kernelcore' and
> >'movablecore' are ignored as long as 'movable_node' is specified.
> >Please also consider this in your code here: 'movable_node' may be
> >specified too, and then the kernel mirror handling needs to be skipped.
> >
> 
> Thanks for the notice, I will add a similar operation.

No, I meant that if movable_node is specified, we have to ignore
'kernelcore=', and then we may need to skip the kernel mirror handling,
since kernel mirror is enabled by 'kernelcore=mirror'.


> 
> >2) process_mem_region() is a key function for processing the available
> >memory regions. Please don't make another process_mem_region() like the
> >one below. You can write a small helper function that finds the
> >immovable_mem[] entries intersecting the passed-in memory region and
> >uses clamp() to get the real available region. You 'REALLY' don't need
> >to split the region in your so-called 'select_immovable_node()' function
> >here.
> >
> 
> OK, I will try a new method that is smaller and filters the regions
> better.
> 
> Thanks,
> Chao Fan
> 
> >PLEASE elaborate more on these details before posting.
> >
> >Thanks
> >Baoquan
> >
> >> +	int i;
> >> +
> >> +	/* If no immovable_mem stored, use region directly */
> >> +	if (num_immovable_region == 0) {
> >> +		process_mem_region(&region, minimum, image_size);
> >> +
> >> +		if (slot_area_index == MAX_SLOT_AREA) {
> >> +			debug_putstr("Aborted memmap scan (slot_areas full)!\n");
> >> +			return 1;
> >> +		}
> >> +	} else {
> >> +		/*
> >> +		 * Walk all immovable regions, and filter the intersection
> >> +		 * to process_mem_region.
> >> +		 */
> >> +		for (i = 0; i < num_immovable_region; i++) {
> >> +			struct mem_vector entry;
> >> +			unsigned long long start, end, select_end, region_end;
> >> +
> >> +			region_end = region.start + region.size - 1;
> >> +			start = immovable_mem[i].start;
> >> +			end = start + immovable_mem[i].size - 1;
> >> +
> >> +			if (region_end < start || region.start > end)
> >> +				continue;
> >> +
> >> +			/* May split one region to several entries. */
> >> +			entry.start = start > region.start ?
> >> +				      start : region.start;
> >> +			select_end = end > region_end ? region_end : end;
> >> +
> >> +			entry.size = select_end - entry.start + 1;
> >> +
> >> +			process_mem_region(&entry, minimum, image_size);
> >> +
> >> +			if (slot_area_index == MAX_SLOT_AREA) {
> >> +				debug_putstr("Aborted memmap scan (slot_areas full)!\n");
> >> +				return 1;
> >> +			}
> >> +		}
> >> +	}
> >> +	return 0;
> >> +}
> >> +
> >>  #ifdef CONFIG_EFI
> >>  /*
> >>   * Returns true if mirror region found (and must have been processed
> >> @@ -699,11 +747,9 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
> >>  
> >>  		region.start = md->phys_addr;
> >>  		region.size = md->num_pages << EFI_PAGE_SHIFT;
> >> -		process_mem_region(&region, minimum, image_size);
> >> -		if (slot_area_index == MAX_SLOT_AREA) {
> >> -			debug_putstr("Aborted EFI scan (slot_areas full)!\n");
> >> +
> >> +		if (select_immovable_node(region, minimum, image_size))
> >>  			break;
> >> -		}
> >>  	}
> >>  	return true;
> >>  }
> >> @@ -730,11 +776,9 @@ static void process_e820_entries(unsigned long minimum,
> >>  			continue;
> >>  		region.start = entry->addr;
> >>  		region.size = entry->size;
> >> -		process_mem_region(&region, minimum, image_size);
> >> -		if (slot_area_index == MAX_SLOT_AREA) {
> >> -			debug_putstr("Aborted e820 scan (slot_areas full)!\n");
> >> +
> >> +		if (select_immovable_node(region, minimum, image_size))
> >>  			break;
> >> -		}
> >>  	}
> >>  }
> >>  
> >> -- 
> >> 2.13.6
> >> 
> >> 
> >> 
> >
> >
> 
> 
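For readers following the review: the clamp()-based intersection helper Baoquan asks for could be sketched roughly as below. This is a standalone illustration, not kernel code; the struct layout matches mem_vector in kaslr.c, but the helper name is hypothetical and clamp() is reproduced locally so the example compiles outside the kernel tree.

```c
#include <assert.h>
#include <stdbool.h>

/* Matches struct mem_vector in arch/x86/boot/compressed/kaslr.c. */
struct mem_vector {
	unsigned long long start;
	unsigned long long size;
};

/* Local stand-in for the kernel's clamp() macro. */
#define clamp(val, lo, hi) \
	((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

/*
 * Intersect @region with one immovable range [imm_start, imm_start + imm_size).
 * Clamping both ends of the region into the immovable range yields the
 * overlap directly, with no open-coded min/max or off-by-one end arithmetic.
 * Returns true and fills @out when the two ranges overlap.
 */
static bool mem_vector_intersect(const struct mem_vector *region,
				 unsigned long long imm_start,
				 unsigned long long imm_size,
				 struct mem_vector *out)
{
	unsigned long long region_end = region->start + region->size;
	unsigned long long imm_end = imm_start + imm_size;
	unsigned long long end;

	out->start = clamp(region->start, imm_start, imm_end);
	end = clamp(region_end, imm_start, imm_end);

	if (end <= out->start)
		return false;	/* no overlap */
	out->size = end - out->start;
	return true;
}
```

With such a helper, the caller can simply loop over immovable_mem[] and hand each non-empty intersection to process_mem_region(), rather than duplicating its slot-area bookkeeping.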
