Date:   Wed, 22 Apr 2020 16:17:25 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, tglx@...utronix.de,
        jpoimboe@...hat.com, x86@...nel.org, mhiramat@...nel.org,
        mbenes@...e.cz, jthierry@...hat.com, alexandre.chartre@...cle.com
Subject: Re: [PATCH 3/3] x86/ftrace: Do not jump to direct code in created trampolines

On Wed, 22 Apr 2020 22:08:08 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> On Wed, Apr 22, 2020 at 12:25:42PM -0400, Steven Rostedt wrote:
> 
> > @@ -367,6 +371,17 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
> >  	if (WARN_ON(ret < 0))
> >  		goto fail;
> >  
> > +	/* No need to test direct calls on created trampolines */
> > +	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
> > +		/* NOP the jnz 1f; but make sure it's a 2 byte jnz */
> > +		ip = trampoline + (jmp_offset - start_offset);
> > +		if (WARN_ON(*(char *)ip != 0x75))
> > +			goto fail;
> > +		ret = probe_kernel_read(ip, ideal_nops[2], 2);  
> 
> Now you're just being silly, are you really, actually worried you can't
> read ideal_nops[] ?

Hah, that was more cut and paste. I guess a memcpy() would be more
appropriate.
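
Something like this, I suppose (untested sketch; ideal_nops[] obviously
can't fault, so a plain memcpy() into the not-yet-live trampoline does
the job):

	/* No need to test direct calls on created trampolines */
	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
		/* NOP the jnz 1f; but make sure it's a 2 byte jnz */
		ip = trampoline + (jmp_offset - start_offset);
		if (WARN_ON(*(char *)ip != 0x75))
			goto fail;
		/* the trampoline isn't executable yet, just patch it directly */
		memcpy(ip, ideal_nops[2], 2);
	}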


> 
> > +		if (ret < 0)
> > +			goto fail;
> > +	}
> > +
> >  	/*
> >  	 * The address of the ftrace_ops that is used for this trampoline
> >  	 * is stored at the end of the trampoline. This will be used to
> > diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
> > index 0882758d165a..f72ef157feb3 100644
> > --- a/arch/x86/kernel/ftrace_64.S
> > +++ b/arch/x86/kernel/ftrace_64.S
> > @@ -241,6 +241,7 @@ SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
> >  	 */
> >  	movq ORIG_RAX(%rsp), %rax
> >  	testq	%rax, %rax
> > +SYM_INNER_LABEL(ftrace_regs_caller_jmp, SYM_L_GLOBAL)
> >  	jnz	1f
> >    
> 
> If you worry about performance, it would make more sense to do something
> like so:
> 
> SYM_INNER_LABEL(ftrace_regs_caller_from, SYM_L_GLOBAL)
> 	movq ORIG_RAX(%rsp), %rax
> 	testq	%rax, %rax
> 	jnz	1f
> SYM_INNER_LABEL(ftrace_regs_caller_to, SYM_L_GLOBAL)
> 
> 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
> 		ip = trampoline + (ftrace_regs_caller_from - start_offset);
> 	((u8 *)ip)[0] = JMP8_INSN_OPCODE;
> 	((u8 *)ip)[1] = ftrace_regs_caller_to - ftrace_regs_caller_from - JMP8_INSN_SIZE;
> 	}
> 
> Or nop the whole range, but it's like 10 bytes so I'm not sure that's
> actually faster.

That could work too. I'll play with that and actually do some benchmarks to
see how much it affects things.
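
For reference, this is roughly how I read that suggestion in
create_trampoline() terms (untested sketch; assumes the _from/_to labels
are made visible to C the same way ftrace_regs_caller_end already is,
and uses JMP8_INSN_OPCODE / JMP8_INSN_SIZE from <asm/text-patching.h>):

	/* Skip the direct-call test entirely in created trampolines */
	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
		unsigned long from = (unsigned long)ftrace_regs_caller_from;
		unsigned long to = (unsigned long)ftrace_regs_caller_to;
		u8 *jmp = (u8 *)trampoline + (from - start_offset);

		/* 2-byte short jump over the movq/testq/jnz sequence */
		jmp[0] = JMP8_INSN_OPCODE;
		jmp[1] = (to - from) - JMP8_INSN_SIZE;
	}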

Thanks!

-- Steve
