Message-ID: <20180517032702.GA6521@localhost.localdomain>
Date:   Thu, 17 May 2018 11:27:02 +0800
From:   Chao Fan <fanc.fnst@...fujitsu.com>
To:     Baoquan He <bhe@...hat.com>
CC:     <linux-kernel@...r.kernel.org>, <mingo@...nel.org>,
        <lcapitulino@...hat.com>, <keescook@...omium.org>,
        <tglx@...utronix.de>, <x86@...nel.org>, <hpa@...or.com>,
        <yasu.isimatu@...il.com>, <indou.takao@...fujitsu.com>,
        <douly.fnst@...fujitsu.com>
Subject: Re: [PATCH 1/2] x86/boot/KASLR: Add two functions for 1GB huge pages
 handling

Hi Baoquan,

I have reviewed the patch, and I think the address calculation has no
problem. But maybe I am missing something, so I have several questions.

On Wed, May 16, 2018 at 06:05:31PM +0800, Baoquan He wrote:
>Functions parse_gb_huge_pages() and process_gb_huge_page() are introduced to
>handle the conflict between KASLR and huge pages; they will be used in the
>next patch.
>
>Function parse_gb_huge_pages() is used to parse the kernel command line to
>get how many 1GB huge pages have been specified. A static global variable
>'max_gb_huge_pages' is added to store the number.
>
>And process_gb_huge_page() is used to skip as many 1GB huge pages as possible
>in the passed-in memory region, according to the specified number.
>
>Signed-off-by: Baoquan He <bhe@...hat.com>
>---
> arch/x86/boot/compressed/kaslr.c | 71 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 71 insertions(+)
>
>diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
>index a0a50b91ecef..13bd879cdc5d 100644
>--- a/arch/x86/boot/compressed/kaslr.c
>+++ b/arch/x86/boot/compressed/kaslr.c
>@@ -215,6 +215,32 @@ static void mem_avoid_memmap(char *str)
> 		memmap_too_large = true;
> }
> 
>+/* Store the number of 1GB huge pages which the user specified. */
>+static unsigned long max_gb_huge_pages;
>+
>+static void parse_gb_huge_pages(char *param, char *val)
>+{
>+	char *p;
>+	u64 mem_size;
>+	static bool gbpage_sz;
>+
>+	if (!strcmp(param, "hugepagesz")) {
>+		p = val;
>+		mem_size = memparse(p, &p);
>+		if (mem_size == PUD_SIZE) {
>+			if (gbpage_sz)
>+				warn("Repeatedly set hugeTLB page size of 1G!\n");
>+			gbpage_sz = true;
>+		} else {
>+			gbpage_sz = false;
>+		}
>+	} else if (!strcmp(param, "hugepages") && gbpage_sz) {
>+		p = val;
>+		max_gb_huge_pages = simple_strtoull(p, &p, 0);
>+		debug_putaddr(max_gb_huge_pages);
>+	}
>+}
>+
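
Just to confirm my reading of the parsing logic with an example, assuming
the usual hugeTLB command-line syntax:

	hugepagesz=1G hugepages=4	-> max_gb_huge_pages == 4
	hugepagesz=2M hugepages=512	-> max_gb_huge_pages == 0

since gbpage_sz is only set when the parsed size equals PUD_SIZE.
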
> static int handle_mem_memmap(void)
> {
> 	char *args = (char *)get_cmd_line_ptr();
>@@ -466,6 +492,51 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
> 	}
> }
> 
>+/* Skip as many 1GB huge pages as possible in the passed region. */
>+static void process_gb_huge_page(struct mem_vector *region, unsigned long image_size)
>+{
>+	int i = 0;
>+	unsigned long addr, size = 0;
>+	struct mem_vector tmp;
>+
>+	if (!max_gb_huge_pages) {
>+		store_slot_info(region, image_size);
>+		return;
>+	}
>+
>+	addr = ALIGN(region->start, PUD_SIZE);
>+	/* Did we raise the address above the passed-in memory entry? */
>+	if (addr < region->start + region->size)
>+		size = region->size - (addr - region->start);
>+
>+	/* Check how many 1GB huge pages can be filtered out: */
>+	while (size > PUD_SIZE && max_gb_huge_pages) {
>+		size -= PUD_SIZE;
>+		max_gb_huge_pages--;

The global variable 'max_gb_huge_pages' stores how many huge pages the user
specified, as parsed from the command line. But here, every time we find a
position which is good for a huge page, 'max_gb_huge_pages' is decreased.
So in my understanding, it actually stores how many huge pages we still need
to find good memory slots for, right?
If that is right, maybe the name 'max_gb_huge_pages' is not very suitable.
If my understanding is wrong, please tell me.
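
Just to double-check those semantics, here is a minimal user-space sketch
(purely illustrative, not kernel code) mirroring the loop in
process_gb_huge_page(), showing how the counter is consumed across regions:

	/* Stand-alone model of the counter consumption: each further
	 * 1GB page that fits in a region decrements the remaining
	 * count until it reaches zero.
	 */
	#include <stdio.h>

	#define PUD_SIZE (1UL << 30)	/* 1GB */

	static unsigned long max_gb_huge_pages = 2;	/* e.g. hugepages=2 */

	static void consume_region(unsigned long start, unsigned long len)
	{
		unsigned long addr = (start + PUD_SIZE - 1) & ~(PUD_SIZE - 1);
		unsigned long size = 0;

		if (addr < start + len)
			size = len - (addr - start);

		while (size > PUD_SIZE && max_gb_huge_pages) {
			size -= PUD_SIZE;
			max_gb_huge_pages--;
		}
	}

	int main(void)
	{
		/* One 3GB region starting on a 1GB boundary: both pages fit. */
		consume_region(1UL << 30, 3UL << 30);
		printf("remaining: %lu\n", max_gb_huge_pages);	/* prints 0 */
		return 0;
	}

After the walk, the variable holds what is still needed, not the maximum
that was requested, which is why the name confused me.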

>+		i++;
>+	}
>+
>+	if (!i) {
>+		store_slot_info(region, image_size);
>+		return;
>+	}
>+
>+	/* Process the remaining regions after filtering out. */
>+
The blank line above seems unnecessary.
>+	if (addr >= region->start + image_size) {
>+		tmp.start = region->start;
>+		tmp.size = addr - region->start;
>+		store_slot_info(&tmp, image_size);
>+	}
>+
>+	size = region->size - (addr - region->start) - i * PUD_SIZE;
>+	if (size >= image_size) {
>+		tmp.start = addr + i * PUD_SIZE;
>+		tmp.size = size;
>+		store_slot_info(&tmp, image_size);
>+	}

I have another question, not strictly related to KASLR.
Here you set aside the memory from addr to (addr + i * PUD_SIZE), but I
wonder: if, after walking all memory regions, 'max_gb_huge_pages' is still
greater than 0, meaning there are not enough memory slots for the requested
huge pages, what will happen?
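
For instance (purely hypothetical, not part of this patch), a check after
all regions have been processed could at least report the shortfall:

	/* Hypothetical follow-up check, run after the region walk:
	 * warn if we could not set aside all requested 1GB huge pages.
	 */
	if (max_gb_huge_pages)
		warn("Not enough suitable memory for all 1GB huge pages!\n");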

Thanks,
Chao Fan

>+}
>+
> static unsigned long slots_fetch_random(void)
> {
> 	unsigned long slot;
>-- 
>2.13.6

