Date:	Thu, 25 Feb 2016 11:04:08 +1100
From:	Balbir Singh <bsingharora@...il.com>
To:	Michael Ellerman <mpe@...erman.id.au>, linuxppc-dev@...abs.org
Cc:	duwe@....de, linux-kernel@...r.kernel.org, rostedt@...dmis.org,
	kamalesh@...ux.vnet.ibm.com, pmladek@...e.com, jeyu@...hat.com,
	jkosina@...e.cz, live-patching@...r.kernel.org, mbenes@...e.cz
Subject: Re: [PATCH 02/12] powerpc/module: Mark module stubs with a magic
 value



On 25/02/16 01:28, Michael Ellerman wrote:
> When a module is loaded, calls out to the kernel go via a stub which is
> generated at runtime. One of these stubs is used to call _mcount(),
> which is the default target of tracing calls generated by the compiler
> with -pg.
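
For readers following along: with -pg the compiler inserts a profiling
call at each function entry. Conceptually (illustration only, not part
of this patch):

	void _mcount(void);	/* profiling hook provided by the kernel */

	void foo(void)
	{
		_mcount();	/* emitted by the compiler, not hand-written */
		/* ... function body ... */
	}

On PPC64 that call goes via a per-module stub, since _mcount() in the
kernel is out of range of the 24-bit relative branch.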
>
> If dynamic ftrace is enabled (which it typically is), another stub is
> used to call ftrace_caller(), which is the target of tracing calls when
> ftrace is actually active.
>
> ftrace then wants to disable the calls to _mcount() at module startup,
> and enable/disable the calls to ftrace_caller() when enabling/disabling
> tracing - all of which it does by patching the code.
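
As a rough sketch of what the code patching amounts to (illustrative
names, not this patch's code): the call site is toggled between a
branch-and-link and a no-op.

	#define PPC_INST_NOP	0x60000000	/* "ori r0,r0,0", the PPC nop */

	/* Disable tracing at one call site. In the kernel this goes
	 * through patch_instruction(), which also flushes the icache;
	 * a plain store is shown here for illustration only. */
	static void sketch_nop_out(unsigned int *ip)
	{
		*ip = PPC_INST_NOP;
	}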
>
> As part of that code patching, the ftrace code wants to confirm that the
> branch it is about to modify is in fact a call to a module stub which
> calls _mcount() or ftrace_caller().
>
> Currently it does that by inspecting the instructions and confirming
> they are what it expects. Although that works, the code to do it is
> pretty intricate because it requires lots of knowledge about the exact
> format of the stub.
>
> We can make that process easier by marking the generated stubs with a
> magic value, and then looking for that magic value. Although this is not
> as rigorous as the current method, I believe it is sufficient in
> practice.
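
A minimal sketch of the magic-value check, assuming the stub layout and
STUB_MAGIC constant introduced below (sketch only, not the exact code
in the patch body; probe_kernel_read() comes from <linux/uaccess.h>):

	static bool sketch_is_stub(struct ppc64_stub_entry *stub)
	{
		u32 magic;

		/* tolerate a bogus address rather than oopsing */
		if (probe_kernel_read(&magic, &stub->magic, sizeof(magic)))
			return false;

		return magic == STUB_MAGIC;
	}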
>
> Signed-off-by: Michael Ellerman <mpe@...erman.id.au>
> ---
>  arch/powerpc/include/asm/module.h |  3 +-
>  arch/powerpc/kernel/ftrace.c      | 14 ++-----
>  arch/powerpc/kernel/module_64.c   | 78 +++++++++++++--------------------------
>  3 files changed, 31 insertions(+), 64 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
> index 74d25a749018..5b6b5a427b54 100644
> --- a/arch/powerpc/include/asm/module.h
> +++ b/arch/powerpc/include/asm/module.h
> @@ -78,8 +78,7 @@ struct mod_arch_specific {
>  #    endif	/* MODULE */
>  #endif
>  
> -bool is_module_trampoline(u32 *insns);
> -int module_trampoline_target(struct module *mod, u32 *trampoline,
> +int module_trampoline_target(struct module *mod, unsigned long trampoline,
>  			     unsigned long *target);
>  
>  #ifdef CONFIG_DYNAMIC_FTRACE
> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
> index 44d4d8eb3c85..4505cbfd0e13 100644
> --- a/arch/powerpc/kernel/ftrace.c
> +++ b/arch/powerpc/kernel/ftrace.c
> @@ -106,10 +106,9 @@ static int
>  __ftrace_make_nop(struct module *mod,
>  		  struct dyn_ftrace *rec, unsigned long addr)
>  {
> -	unsigned int op;
> -	unsigned long entry, ptr;
> +	unsigned long entry, ptr, tramp;
>  	unsigned long ip = rec->ip;
> -	void *tramp;
> +	unsigned int op;
>  
>  	/* read where this goes */
>  	if (probe_kernel_read(&op, (void *)ip, sizeof(int)))
> @@ -122,14 +121,9 @@ __ftrace_make_nop(struct module *mod,
>  	}
>  
>  	/* lets find where the pointer goes */
> -	tramp = (void *)find_bl_target(ip, op);
> -
> -	pr_devel("ip:%lx jumps to %p", ip, tramp);
> +	tramp = find_bl_target(ip, op);
>  
> -	if (!is_module_trampoline(tramp)) {
> -		pr_err("Not a trampoline\n");
> -		return -EINVAL;
> -	}
> +	pr_devel("ip:%lx jumps to %lx", ip, tramp);
>  
>  	if (module_trampoline_target(mod, tramp, &ptr)) {
>  		pr_err("Failed to get trampoline target\n");
> diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
> index 599c753c7960..9629966e614b 100644
> --- a/arch/powerpc/kernel/module_64.c
> +++ b/arch/powerpc/kernel/module_64.c
> @@ -96,6 +96,8 @@ static unsigned int local_entry_offset(const Elf64_Sym *sym)
>  }
>  #endif
>  
> +#define STUB_MAGIC 0x73747562 /* stub */
> +
>  /* Like PPC32, we need little trampolines to do > 24-bit jumps (into
>     the kernel itself).  But on PPC64, these need to be used for every
>     jump, actually, to reset r2 (TOC+0x8000). */
> @@ -105,7 +107,8 @@ struct ppc64_stub_entry
>  	 * need 6 instructions on ABIv2 but we always allocate 7 so
>  	 * we don't have to modify the trampoline load instruction. */
>  	u32 jump[7];
> -	u32 unused;
> +	/* Used by ftrace to identify stubs */
> +	u32 magic;
> 
>  	/* Data for the above code */
>  	func_desc_t funcdata;
>  };
> @@ -139,70 +142,39 @@ static u32 ppc64_stub_insns[] = {
>  };
>  
>  #ifdef CONFIG_DYNAMIC_FTRACE
> -
> -static u32 ppc64_stub_mask[] = {
> -	0xffff0000,
> -	0xffff0000,
> -	0xffffffff,
> -	0xffffffff,
> -#if !defined(_CALL_ELF) || _CALL_ELF != 2
> -	0xffffffff,
> -#endif
> -	0xffffffff,
> -	0xffffffff
> -};
> -
> -bool is_module_trampoline(u32 *p)
> +int module_trampoline_target(struct module *mod, unsigned long addr,
> +			     unsigned long *target)
>  {
> -	unsigned int i;
> -	u32 insns[ARRAY_SIZE(ppc64_stub_insns)];
> -
> -	BUILD_BUG_ON(sizeof(ppc64_stub_insns) != sizeof(ppc64_stub_mask));
> +	struct ppc64_stub_entry *stub;
> +	func_desc_t funcdata;
> +	u32 magic;
>  
> -	if (probe_kernel_read(insns, p, sizeof(insns)))
> +	if (!within_module_core(addr, mod)) {
> +		pr_err("%s: stub %lx not in module %s\n", __func__, addr, mod->name);
>  		return -EFAULT;

-EFAULT or -EINVAL? I wonder if we can recover from a bad trampoline address.
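
The quote is trimmed here, but presumably the function goes on to
validate stub->magic and then read funcdata to produce the target.
Something of this shape (a sketch, not the verbatim patch; it assumes
the existing stub_func_addr() helper in module_64.c):

	stub = (struct ppc64_stub_entry *)addr;

	if (probe_kernel_read(&magic, &stub->magic, sizeof(magic)))
		return -EFAULT;

	if (magic != STUB_MAGIC)
		return -EINVAL;	/* or -EFAULT, per the question above */

	if (probe_kernel_read(&funcdata, &stub->funcdata, sizeof(funcdata)))
		return -EFAULT;

	*target = stub_func_addr(funcdata);
	return 0;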

Reviewed-by: Balbir Singh <bsingharora@...il.com>
