Message-ID: <20190423195023.425902dc@gandalf.local.home>
Date:   Tue, 23 Apr 2019 19:50:23 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Nicolai Stange <nstange@...e.de>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Jiri Kosina <jikos@...nel.org>,
        Miroslav Benes <mbenes@...e.cz>,
        Petr Mladek <pmladek@...e.com>, live-patching@...r.kernel.org,
        x86@...nel.org, linux-kernel@...r.kernel.org,
        Andy Lutomirski <luto@...capital.net>
Subject: Re: [RFC PATCH 1/1] x86/ftrace: make ftrace_int3_handler() not to
 skip fops invocation

On Tue, 23 Apr 2019 20:15:49 +0200
Nicolai Stange <nstange@...e.de> wrote:

> > This can be made much simpler by making a hardcoded ftrace_int3_tramp
> > that does the following:
> >
> > ftrace_int3_tramp
> > 	push	%r11
> > 	jmp	ftrace_caller  
> 
> 
> But wouldn't this mean that ftrace_caller could nest if the breakpoint
> in question happened to be placed at ftrace_call? Infinite recursion set
> aside, the ip value determined from inner calls based on the on-stack
> return address would be wrong, AFAICS. Or am I missing something here?

I had a reply here, but I think you explained what I just explained
(and then deleted) below ;-)

> 
> 
> > The ftrace_caller will either call a single ftrace callback, if there's
> > only a single ops registered, or it will call the loop function that
> > iterates over all the ftrace_ops registered and will call each function
> > that matches the ftrace_ops hashes.
> >
> > In either case, it's what we want.  
> 
> Ok, under the assumption that my above point is valid, this patch could
> still get simplified a lot by having two trampolines:
> 
> 1.) Your ftrace_int3_tramp from above, to be used if the patched insn is
>     some mcount call site. The live patching fops will need
>     ftrace_regs_caller though. So,
> 
> 	ftrace_int3_tramp_regs_caller:
> 		push	%r11
> 		jmp	ftrace_regs_caller
> 
> 2.) Another one redirecting control flow to ftrace_ops_list_func(). It's
>     going to be used when the int3 is found at ftrace_call or
>     ftrace_regs_call resp. their counterparts in some ftrace_ops'
>     trampoline.
> 
> 	ftrace_int3_tramp_list_func:
> 		push	%r11
> 		jmp	ftrace_ops_list_func

Yes, I wrote a reply basically stating something similar, but then
deleted it after reading what you wrote here!

> 
> ftrace_int3_handler() would then distinguish the following cases:
> 1.) ip == ftrace_graph_call => ignore, i.e. skip the insn

OK, because this transition would be from "call function graph" to
"nop" or the other way around. In either case, one side would always be
a nop.

> 2.) is_ftrace_caller(ip) => ftrace_int3_tramp_list_func
> 3.) ftrace_location(ip) => ftrace_int3_tramp_regs_caller
> 
> ftrace_location() is getting invoked from ftrace_int3_handler() already,
> so there wouldn't be any additional cost.
> 
> If that makes sense to you, I'd prepare a patch...

Yes it does.

> 
> 
> > The ftrace_int3_tramp will simply simulate the call ftrace_caller that
> > would be there as the default caller if more than one function is set
> > to it.
> >
> > For 32 bit, we could add 4 variables on the thread_info and make 4
> > trampolines, one for each context (normal, softirq, irq and NMI), and
> > have them use the variable stored in the thread_info for that location:
> >
> > ftrace_int3_tramp_normal
> > 	push %eax # just for space
> > 	push %eax
> > 	mov whatever_to_get_thread_info, %eax
> > 	mov normal(%eax), %eax
> > 	mov %eax, 4(%esp)
> > 	pop %eax
> > 	jmp ftrace_caller
> >
> > ftrace_int3_tramp_softirq
> > 	push %eax # just for space
> > 	push %eax
> > 	mov whatever_to_get_thread_info, %eax
> > 	mov softirq(%eax), %eax
> > 	mov %eax, 4(%esp)
> > 	pop %eax
> > 	jmp ftrace_caller
> >
> > etc..
> >
> >
> > A bit crazier for 32 bit but it can still be done. ;-)  
> 
> Ok, but currently CONFIG_HAVE_LIVEPATCH=n for x86 && !x86_64,
> so I'd rather not invest too much energy into screwing 32 bit up ;)
>

Probably not something you care about, but it is something that I do,
which means it will have to go on my TODO list. I care about function
calls being missed: if you have something tracing a function, and then
enable something else to trace that same function, the first tracer
might miss a call while the site is being patched.

-- Steve
