Date:	Fri, 9 Sep 2011 14:23:25 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Michael Holzheu <holzheu@...ux.vnet.ibm.com>
Cc:	ebiederm@...ssion.com, mahesh@...ux.vnet.ibm.com,
	schwidefsky@...ibm.com, heiko.carstens@...ibm.com,
	kexec@...ts.infradead.org, linux-kernel@...r.kernel.org,
	linux-s390@...r.kernel.org
Subject: Re: [RFC][patch 1/2] kdump: Add infrastructure for unmapping
 crashkernel memory

On Thu, Sep 08, 2011 at 03:26:10PM +0200, Michael Holzheu wrote:
> From: Michael Holzheu <holzheu@...ux.vnet.ibm.com>
> 
> This patch introduces a mechanism that allows architecture backends to
> remove page tables for the crashkernel memory. This can protect the loaded
> kdump kernel from being overwritten by broken kernel code.
> A new function crash_map_pages() is added that can be implemented by
> architecture code. This function has the following syntax:

I guess having separate functions for mapping and unmapping pages would
look cleaner. Since we are not passing a page range, naming the affected
pages in the function names makes it clearer which pages we are talking
about:

crash_map_reserved_pages()
crash_unmap_reserved_pages()

Secondly, what happens to the code which runs after a crash
(crash_kexec())? The current x86 code assumes that the reserved region
is mapped at the time of the crash and does a few things with the
control page there.

So this generic approach is not valid, at least for x86, because it
does not address how to map the reserved range again once the kernel
crashes. It only works under the assumption that after a crash we do
not expect the reserved range/pages to be mapped.

Thanks
Vivek
 

> 
> void crash_map_pages(int enable);
> 
> "enable" can be 0 for removing or 1 for adding page tables.  The function is
> called before and after the crashkernel segments are loaded. It is also
> called in crash_shrink_memory() to create new page tables when the
> crashkernel memory size is reduced.
> 
> To support architectures that have large pages this patch also introduces
> a new define KEXEC_CRASH_MEM_ALIGN. The crashkernel start and size must 
> always be aligned with KEXEC_CRASH_MEM_ALIGN.
> 
> Signed-off-by: Michael Holzheu <holzheu@...ux.vnet.ibm.com>
> ---
>  include/linux/kexec.h |    5 +++++
>  kernel/kexec.c        |   16 ++++++++++++++--
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> --- a/include/linux/kexec.h
> +++ b/include/linux/kexec.h
> @@ -37,6 +37,10 @@
>  #define KEXEC_CRASH_CONTROL_MEMORY_LIMIT KEXEC_CONTROL_MEMORY_LIMIT
>  #endif
>  
> +#ifndef KEXEC_CRASH_MEM_ALIGN
> +#define KEXEC_CRASH_MEM_ALIGN PAGE_SIZE
> +#endif
> +
>  #define KEXEC_NOTE_HEAD_BYTES ALIGN(sizeof(struct elf_note), 4)
>  #define KEXEC_CORE_NOTE_NAME "CORE"
>  #define KEXEC_CORE_NOTE_NAME_BYTES ALIGN(sizeof(KEXEC_CORE_NOTE_NAME), 4)
> @@ -133,6 +137,7 @@ extern void crash_kexec(struct pt_regs *
>  int kexec_should_crash(struct task_struct *);
>  void crash_save_cpu(struct pt_regs *regs, int cpu);
>  void crash_save_vmcoreinfo(void);
> +void crash_map_pages(int enable);
>  void arch_crash_save_vmcoreinfo(void);
>  void vmcoreinfo_append_str(const char *fmt, ...)
>  	__attribute__ ((format (printf, 1, 2)));
> --- a/kernel/kexec.c
> +++ b/kernel/kexec.c
> @@ -999,6 +999,7 @@ SYSCALL_DEFINE4(kexec_load, unsigned lon
>  			kimage_free(xchg(&kexec_crash_image, NULL));
>  			result = kimage_crash_alloc(&image, entry,
>  						     nr_segments, segments);
> +			crash_map_pages(1);
>  		}
>  		if (result)
>  			goto out;
> @@ -1015,6 +1016,8 @@ SYSCALL_DEFINE4(kexec_load, unsigned lon
>  				goto out;
>  		}
>  		kimage_terminate(image);
> +		if (flags & KEXEC_ON_CRASH)
> +			crash_map_pages(0);
>  	}
>  	/* Install the new kernel, and  Uninstall the old */
>  	image = xchg(dest_image, image);
> @@ -1026,6 +1029,13 @@ out:
>  	return result;
>  }
>  
> +/*
> + * provide an empty default implementation here -- architecture
> + * code may override this
> + */
> +void __weak crash_map_pages(int enable)
> +{}
> +
>  #ifdef CONFIG_COMPAT
>  asmlinkage long compat_sys_kexec_load(unsigned long entry,
>  				unsigned long nr_segments,
> @@ -1134,14 +1144,16 @@ int crash_shrink_memory(unsigned long ne
>  		goto unlock;
>  	}
>  
> -	start = roundup(start, PAGE_SIZE);
> -	end = roundup(start + new_size, PAGE_SIZE);
> +	start = roundup(start, KEXEC_CRASH_MEM_ALIGN);
> +	end = roundup(start + new_size, KEXEC_CRASH_MEM_ALIGN);
>  
> +	crash_map_pages(1);
>  	crash_free_reserved_phys_range(end, crashk_res.end);
>  
>  	if ((start == end) && (crashk_res.parent != NULL))
>  		release_resource(&crashk_res);
>  	crashk_res.end = end - 1;
> +	crash_map_pages(0);
>  
>  unlock:
>  	mutex_unlock(&kexec_mutex);
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
