lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Thu, 24 Jan 2019 09:38:02 +1300
From:   "Singh, Balbir" <bsingharora@...il.com>
To:     Torsten Duwe <duwe@....de>
Cc:     Mark Rutland <mark.rutland@....com>,
        Will Deacon <will.deacon@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        Julien Thierry <julien.thierry@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Ingo Molnar <mingo@...hat.com>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Arnd Bergmann <arnd@...db.de>,
        AKASHI Takahiro <takahiro.akashi@...aro.org>,
        Amit Daniel Kachhap <amit.kachhap@....com>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        live-patching@...r.kernel.org
Subject: Re: [PATCH v7 2/3] arm64: implement ftrace with regs



On 1/23/19 2:09 AM, Torsten Duwe wrote:
> Hi Balbir!
> 

Hi, Torsten!

> On Tue, Jan 22, 2019 at 02:39:32PM +1300, Singh, Balbir wrote:
>>
>> On 1/19/19 5:39 AM, Torsten Duwe wrote:
>>> + */
>>> +ftrace_common_return:
>>> +	/* restore function args */
>>> +	ldp	x0, x1, [sp]
>>> +	ldp	x2, x3, [sp, #S_X2]
>>> +	ldp	x4, x5, [sp, #S_X4]
>>> +	ldp	x6, x7, [sp, #S_X6]
>>> +	ldr	x8, [sp, #S_X8]
>>> +
>>> +	/* restore fp and x28 */
>>> +	ldp	x28, x29, [sp, #S_X28]
>>> +
>>> +	ldr	lr, [sp, #S_LR]
>>> +	ldr	x9, [sp, #S_PC]
>>
>> Is it fair to assume that we never modify registers beyond LR and PC as a result of ftrace/livepatching? I presume it is, but just checking.
> 
> These are either callee-save or scratch. Whatever is called, ftrace framework
> functions or replacement functions, must preserve the callee-saved regs; and
> the caller, who made a function call (sic!-) saves caller-saved and marks the
> rest dead on return. So it's the arguments that matter after all.
> 
> As you can see, disabling IPA-RA is cruicial here.
> 
> Or are you talking about deliberate argument manipulation?

I wonder if another use of ftrace via register_ftrace_ops can expect to modify arguments, like we modify the PC and RA

> 
>>> +	unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
>>> +	u32 old, new;
>>> +
>>> +	old = aarch64_insn_gen_branch_imm(pc, old_addr, true);
>>> +	new = aarch64_insn_gen_branch_imm(pc, addr, true);
>>> +
>>
>> Is this a branch or a call? Does addr always fit in the immediate limits?
> 
> As Julien has now pointed out, the correct enum value AARCH64_INSN_BRANCH_LINK
> should clarify this. It will surely fit for the kernel proper, and the modules
> are handled with the trampolines.

OK, so there is an assumption of the size of vmlinux being in a certain range? I suspect something like a allyesconfig with debug might be a worthy challenger :) Yes, modules do get trampolines.

> 
>>> +	return ftrace_modify_code(pc, old, new, true);
>>
>> Can you talk to the semantics of whether this operation is atomic w.r.t system? Will old and new return consistent values? Given the nature of ftrace, I presume it's well isolated. 
> 
> aarch64_insn_patch_text_nosync() does a __flush_icache_range() on success.
> Mark wrote that this is already sufficient IIRC. (I had memory barriers
> there, when I was still trying to modify 2 insns every time).
> 
>>
>>> +	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
>>> +		addr == MCOUNT_ADDR) {
>>> +		old = aarch64_insn_gen_nop();
>>> +		new = MOV_X9_X30;
>>> +		pc -= REC_IP_BRANCH_OFFSET;
>>> +		return ftrace_modify_code(pc, old, new, validate);
>>
>> I presume all the icache flush and barrier handling is in ftrace_modify_code()?
> 
> Yes, see above.
> 

Thanks!

>>> +	}
>>> +
>>>  	if (offset < -SZ_128M || offset >= SZ_128M) {
>>>  #ifdef CONFIG_ARM64_MODULE_PLTS
>>>  		u32 replaced;
>>> --- a/arch/arm64/include/asm/module.h
>>> +++ b/arch/arm64/include/asm/module.h
>>> @@ -32,7 +32,8 @@ struct mod_arch_specific {
>>>  	struct mod_plt_sec	init;
>>>  
>>>  	/* for CONFIG_DYNAMIC_FTRACE */
>>> -	struct plt_entry 	*ftrace_trampoline;
>>> +	struct plt_entry	*ftrace_trampolines;
>>> +#define MOD_ARCH_NR_FTRACE_TRAMPOLINES	2
>>
>> I don't see the generation of ftrace_trampolines[1]
>>
> 
> That was further up, install_ftrace_trampoline() in kernel/ftrace.c.
> 

Thanks!

> +       if (*addr == FTRACE_ADDR)
> +               mod_trampoline = &mod->arch.ftrace_trampolines[0];
> +       else if (*addr == FTRACE_REGS_ADDR)
> +               mod_trampoline = &mod->arch.ftrace_trampolines[1];
> [...]
> +       trampoline = get_plt_entry(*addr, mod_trampoline);
> +
> +       if (!plt_entries_equal(mod_trampoline, &trampoline)) {
> [...]
> 
> get_plt_entry() generates a small bunch of instructions that easily
> fit into the argument registers. Compare commit bdb85cd1d20669dfae8
> for the new trampoline insns.
> 
> Hope I've covered all your concerns,
> 

Yes, largely thank you

Balbir

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ