Message-ID: <d22b27b5-6b76-6124-efff-fd577a8f482e@arm.com>
Date:   Sat, 2 Nov 2019 17:50:02 +0530
From:   Amit Daniel Kachhap <amit.kachhap@....com>
To:     Mark Rutland <mark.rutland@....com>,
        linux-arm-kernel@...ts.infradead.org
Cc:     linux-kernel@...r.kernel.org, catalin.marinas@....com,
        deller@....de, duwe@...e.de, James.Bottomley@...senPartnership.com,
        james.morse@....com, jeyu@...nel.org, jpoimboe@...hat.com,
        jthierry@...hat.com, linux-parisc@...r.kernel.org,
        mingo@...hat.com, peterz@...radead.org, rostedt@...dmis.org,
        svens@...ckframe.org, takahiro.akashi@...aro.org, will@...nel.org
Subject: Re: [PATCHv2 4/8] arm64: module/ftrace: initialize PLT at load time

Hi,

On 10/29/19 10:28 PM, Mark Rutland wrote:
> Currently we lazily-initialize a module's ftrace PLT at runtime when we
> install the first ftrace call. To do so we have to apply a number of
> sanity checks, transiently mark the module text as RW, and perform an
> IPI as part of handling Neoverse-N1 erratum #1542419.
> 
> We only expect the ftrace trampoline to point at ftrace_caller() (AKA
> FTRACE_ADDR), so let's simplify all of this by initializing the PLT at
> module load time, before the module loader marks the module RO and
> performs the initial I-cache maintenance for the module.
> 
> Thus we can rely on the module having been correctly initialized, and can
> simplify the runtime work necessary to install an ftrace call in a
> module. This will also allow for the removal of module_disable_ro().
> 
> Tested by forcing ftrace_make_call() to use the module PLT, and then
> loading up a module after setting up ftrace with:
> 
> | echo ":mod:<module-name>" > set_ftrace_filter;
> | echo function > current_tracer;
> | modprobe <module-name>
> 
> Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is
> selected, we wrap its use along with most of module_init_ftrace_plt()
> with ifdeffery rather than using IS_ENABLED().
> 
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@...aro.org>
> Cc: Catalin Marinas <catalin.marinas@....com>
> Cc: James Morse <james.morse@....com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Will Deacon <will@...nel.org>
> ---
>   arch/arm64/kernel/ftrace.c | 55 ++++++++++++----------------------------------
>   arch/arm64/kernel/module.c | 32 +++++++++++++++++----------
>   2 files changed, 35 insertions(+), 52 deletions(-)
> 
> diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
> index 06e56b470315..822718eafdb4 100644
> --- a/arch/arm64/kernel/ftrace.c
> +++ b/arch/arm64/kernel/ftrace.c
> @@ -73,10 +73,22 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
>   
>   	if (offset < -SZ_128M || offset >= SZ_128M) {
>   #ifdef CONFIG_ARM64_MODULE_PLTS
> -		struct plt_entry trampoline, *dst;
>   		struct module *mod;
>   
>   		/*
> +		 * There is only one ftrace trampoline per module. For now,
> +		 * this is not a problem since on arm64, all dynamic ftrace
> +		 * invocations are routed via ftrace_caller(). This will need
> +		 * to be revisited if support for multiple ftrace entry points
> +		 * is added in the future, but for now, the pr_err() below
> +		 * deals with a theoretical issue only.
> +		 */
> +		if (addr != FTRACE_ADDR) {
> +			pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
> +			return -EINVAL;
> +		}
> +
> +		/*
>   		 * On kernels that support module PLTs, the offset between the
>   		 * branch instruction and its target may legally exceed the
>   		 * range of an ordinary relative 'bl' opcode. In this case, we
> @@ -93,46 +105,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
>   		if (WARN_ON(!mod))
>   			return -EINVAL;
>   
> -		/*
> -		 * There is only one ftrace trampoline per module. For now,
> -		 * this is not a problem since on arm64, all dynamic ftrace
> -		 * invocations are routed via ftrace_caller(). This will need
> -		 * to be revisited if support for multiple ftrace entry points
> -		 * is added in the future, but for now, the pr_err() below
> -		 * deals with a theoretical issue only.
> -		 *
> -		 * Note that PLTs are place relative, and plt_entries_equal()
> -		 * checks whether they point to the same target. Here, we need
> -		 * to check if the actual opcodes are in fact identical,
> -		 * regardless of the offset in memory so use memcmp() instead.
> -		 */
> -		dst = mod->arch.ftrace_trampoline;
> -		trampoline = get_plt_entry(addr, dst);
> -		if (memcmp(dst, &trampoline, sizeof(trampoline))) {
> -			if (plt_entry_is_initialized(dst)) {
> -				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
> -				return -EINVAL;
> -			}
> -
> -			/* point the trampoline to our ftrace entry point */
> -			module_disable_ro(mod);
> -			*dst = trampoline;
> -			module_enable_ro(mod, true);
> -
> -			/*
> -			 * Ensure updated trampoline is visible to instruction
> -			 * fetch before we patch in the branch. Although the
> -			 * architecture doesn't require an IPI in this case,
> -			 * Neoverse-N1 erratum #1542419 does require one
> -			 * if the TLB maintenance in module_enable_ro() is
> -			 * skipped due to rodata_enabled. It doesn't seem worth
> -			 * it to make it conditional given that this is
> -			 * certainly not a fast-path.
> -			 */
> -			flush_icache_range((unsigned long)&dst[0],
> -					   (unsigned long)&dst[1]);
> -		}
> -		addr = (unsigned long)dst;
> +		addr = (unsigned long)mod->arch.ftrace_trampoline;
>   #else /* CONFIG_ARM64_MODULE_PLTS */
>   		return -EINVAL;
>   #endif /* CONFIG_ARM64_MODULE_PLTS */
> diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
> index 763a86d52fef..5f5bc3b94da7 100644
> --- a/arch/arm64/kernel/module.c
> +++ b/arch/arm64/kernel/module.c
> @@ -9,6 +9,7 @@
>   
>   #include <linux/bitops.h>
>   #include <linux/elf.h>
> +#include <linux/ftrace.h>
>   #include <linux/gfp.h>
>   #include <linux/kasan.h>
>   #include <linux/kernel.h>
> @@ -485,24 +486,33 @@ static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
>   	return NULL;
>   }
>   
> +int module_init_ftrace_plt(const Elf_Ehdr *hdr,
> +			   const Elf_Shdr *sechdrs,
> +			   struct module *mod)
I think this function can be made static, as it is not used outside this file.
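
Something like this (untested sketch; only the declaration changes, the body
is copied unchanged from your patch):

/* Untested sketch: identical body, just marked static. */
static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
				  const Elf_Shdr *sechdrs,
				  struct module *mod)
{
#if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
	const Elf_Shdr *s;
	struct plt_entry *plt;

	/* Locate the reserved ftrace trampoline section in the module. */
	s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
	if (!s)
		return -ENOEXEC;

	/* Point the PLT at ftrace_caller() (FTRACE_ADDR) before the module goes RO. */
	plt = (void *)s->sh_addr;
	*plt = get_plt_entry(FTRACE_ADDR, plt);
	mod->arch.ftrace_trampoline = plt;
#endif
	return 0;
}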

Thanks,
Amit Daniel
> +{
> +#if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
> +	const Elf_Shdr *s;
> +	struct plt_entry *plt;
> +
> +	s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
> +	if (!s)
> +		return -ENOEXEC;
> +
> +	plt = (void *)s->sh_addr;
> +	*plt = get_plt_entry(FTRACE_ADDR, plt);
> +	mod->arch.ftrace_trampoline = plt;
> +#endif
> +	return 0;
> +}
> +
>   int module_finalize(const Elf_Ehdr *hdr,
>   		    const Elf_Shdr *sechdrs,
>   		    struct module *me)
>   {
>   	const Elf_Shdr *s;
> -
>   	s = find_section(hdr, sechdrs, ".altinstructions");
>   	if (s)
>   		apply_alternatives_module((void *)s->sh_addr, s->sh_size);
>   
> -#ifdef CONFIG_ARM64_MODULE_PLTS
> -	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE)) {
> -		s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
> -		if (!s)
> -			return -ENOEXEC;
> -		me->arch.ftrace_trampoline = (void *)s->sh_addr;
> -	}
> -#endif
> -
> -	return 0;
> +	return module_init_ftrace_plt(hdr, sechdrs, me);
>   }
> 
