Message-ID: <alpine.DEB.2.21.1803012057300.1391@nanos.tec.linutronix.de>
Date:   Thu, 1 Mar 2018 22:12:56 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Yang Bo <yangbo@...pin.com>
cc:     Ingo Molnar <mingo@...hat.com>, "H . Peter Anvin" <hpa@...or.com>,
        x86@...nel.org, Masami Hiramatsu <mhiramat@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Laura Abbott <labbott@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/kprobe: unbalanced preempt counter with
 CONFIG_PREEMPT=y

On Mon, 15 Jan 2018, yangbo@...pin.com wrote:

> From: Yang Bo <yangbo@...pin.com>
> 
> For each kprobe hook, preempt_disable() is called twice but
> preempt_enable() only once when CONFIG_PREEMPT=y, no matter
> whether the kprobe uses ftrace or int3. The preempt counter is
> left unbalanced and eventually overflows, after which the process
> may be preempted and migrated to another cpu. The kprobe's int3 is
> then no longer recognized as a kprobe int3, and the kernel dies.
> 
> The backtrace looks like this:
> 
> int3: 0000 [#1] PREEMPT SMP
> Modules linked in: ...
> CPU: 1 PID: 2618 Comm: ...
> Hardware: ...
> task: ...
> RIP: 0010:[<ffffffff81457cd5>] [<ffffffff81457cd5>] jprobe_return_end+0x0/0xb
> ...
> RIP [<ffffffff81457cd5>] jprobe_return_end+0x0/0xb
> RSP ...
> 
> Signed-off-by: Yang Bo <yangbo@...pin.com>
> ---
>  arch/x86/kernel/kprobes/core.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index bd36f3c33cd0..1ea5c9caa8f1 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -611,6 +611,7 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
>  		regs->ip = (unsigned long)p->addr;
>  	else
>  		regs->ip = (unsigned long)p->ainsn.insn;
> +	preempt_enable_no_resched();
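
The leak the changelog describes can be modelled in a few lines of
plain C. The following is a simplified sketch with userspace stand-ins
for the kernel primitives and a schematic call structure; it is not
the actual kprobes code:

#include <stdio.h>

static int preempt_count;

/* Userspace stand-ins for the kernel primitives. */
static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

/* One kprobe hit as described above: two disables, one enable. */
static void kprobe_hit(void)
{
        preempt_disable();      /* taken in the int3 handler */
        preempt_disable();      /* taken again on the way to the single-step */
        /* ... handler work ... */
        preempt_enable();       /* but dropped only once */
}

int main(void)
{
        for (int i = 0; i < 3; i++)
                kprobe_hit();
        /* Leaks one level per hit instead of returning to zero. */
        printf("preempt_count = %d\n", preempt_count);  /* prints 3 */
        return 0;
}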

Good catch, but to be honest the proper fix is to move the preempt_enable()
invocation to the call site. This kind of asymmetric handling is error prone
and leads to exactly this type of bug. Just adding more preempt_enable()
invocations proliferates the problem.
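
Schematically, the symmetric shape being suggested would be something
like the sketch below, again with userspace stand-ins and a placeholder
helper name rather than the real code:

static int preempt_count;

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

/* The helper does its work without touching the preempt count. */
static void setup_singlestep_stub(void)
{
        /* ... prepare the single-step ... */
}

static void int3_handler_sketch(void)
{
        preempt_disable();
        setup_singlestep_stub();
        preempt_enable();       /* the site that disabled also enables */
}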

Looking deeper into kprobe_int3_handler(), there are more places which do
this completely incomprehensible conditional preempt count handling via
skip_singlestep() and setup_detour_execution(). What an unholy mess.
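
For contrast, that conditional pattern looks roughly like this: whether
the caller still holds the preempt count after a helper returns depends
on the helper's internal branches, signalled only by its return value.
Again a schematic model with placeholder names, not the real code:

static int preempt_count;

static void preempt_disable(void)           { preempt_count++; }
static void preempt_enable_no_resched(void) { preempt_count--; }

/* Drops the count on one path only; the caller has to infer that
   from the return value. */
static int skip_singlestep_stub(int can_skip)
{
        if (can_skip) {
                preempt_enable_no_resched();
                return 1;       /* count already dropped here */
        }
        return 0;               /* caller still holds the count */
}

static void int3_handler_messy(int can_skip)
{
        preempt_disable();
        if (skip_singlestep_stub(can_skip))
                return;         /* balanced only because the helper enabled */
        /* ... otherwise the count stays held until the single-step
           completes somewhere else entirely ... */
}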

Masami?

Thanks,

	tglx



