Message-Id: <20230216004859.ab66b42e2e0029cf042fe194@kernel.org>
Date:   Thu, 16 Feb 2023 00:48:59 +0900
From:   Masami Hiramatsu (Google) <mhiramat@...nel.org>
To:     Yang Jihong <yangjihong1@...wei.com>
Cc:     <tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
        <dave.hansen@...ux.intel.com>, <x86@...nel.org>, <hpa@...or.com>,
        <naveen.n.rao@...ux.ibm.com>, <anil.s.keshavamurthy@...el.com>,
        <davem@...emloft.net>, <ast@...nel.org>, <peterz@...radead.org>,
        <linux-kernel@...r.kernel.org>,
        <linux-trace-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] x86/kprobes: Fix arch_check_optimized_kprobe check
 within optimized_kprobe range

On Wed, 15 Feb 2023 19:54:30 +0800
Yang Jihong <yangjihong1@...wei.com> wrote:

> When arch_prepare_optimized_kprobe() calculates the jump destination
> address, it copies the original instructions from the jmp-optimized kprobe
> (see __recover_optprobed_insn()), and the destination is calculated based
> on the length of the original instruction.
> 
> arch_check_optimized_kprobe() does not check KPROBE_FLAG_OPTIMIZED when
> checking whether a jmp-optimized kprobe exists.
> As a result, setup_detour_execution() may jump to a range that has been
> overwritten by the jump destination address, resulting in an invalid opcode error.

OK, good catch!! I missed the "delayed unoptimization" case here too.

> 
> For example, assume that we register two kprobes whose addresses are
> <func+9> and <func+11> in the "func" function.
> The original code of "func" is as follows:
> 
>    0xffffffff816cb5e9 <+9>:     push   %r12
>    0xffffffff816cb5eb <+11>:    xor    %r12d,%r12d
>    0xffffffff816cb5ee <+14>:    test   %rdi,%rdi
>    0xffffffff816cb5f1 <+17>:    setne  %r12b
>    0xffffffff816cb5f5 <+21>:    push   %rbp
> 
> 1. Register the kprobe for <func+11>; call it kp1, with corresponding optimized_kprobe op1.
>    After the optimization, the "func" code changes to:
> 
>    0xffffffff816cc079 <+9>:     push   %r12
>    0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
>    0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
>    0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
>    0xffffffff816cc084 <+20>:    (bad)
>    0xffffffff816cc085 <+21>:    push   %rbp
> 
> Now op1->flags == KPROBE_FLAG_OPTIMIZED;
> 
> 2. Register the kprobe for <func+9>; call it kp2, with corresponding optimized_kprobe op2.
> 
> register_kprobe(kp2)
>   register_aggr_kprobe
>     alloc_aggr_kprobe
>       __prepare_optimized_kprobe
>         arch_prepare_optimized_kprobe
>           __recover_optprobed_insn    // copy original bytes from kp1->optinsn.copied_insn,
>                                       // jump address = <func+14>
> 
> 3. disable kp1:
> 
> disable_kprobe(kp1)
>   __disable_kprobe
>     ...
>     if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
>       ret = disarm_kprobe(orig_p, true)       // adds op1 to unoptimizing_list; not yet unoptimized
>       orig_p->flags |= KPROBE_FLAG_DISABLED;  // op1->flags == KPROBE_FLAG_OPTIMIZED | KPROBE_FLAG_DISABLED
>     ...
> 
> 4. unregister kp2
> __unregister_kprobe_top
>   ...
>   if (!kprobe_disabled(ap) && !kprobes_all_disarmed) {
>     optimize_kprobe(op)
>       ...
>       if (arch_check_optimized_kprobe(op) < 0) // because op1 has KPROBE_FLAG_DISABLED, this does not return here
>         return;
>       p->kp.flags |= KPROBE_FLAG_OPTIMIZED;   //  now op2 has KPROBE_FLAG_OPTIMIZED
>   }
> 
> "func" code now is:
> 
>    0xffffffff816cc079 <+9>:     int3
>    0xffffffff816cc07a <+10>:    push   %rsp
>    0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
>    0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
>    0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
>    0xffffffff816cc084 <+20>:    (bad)
>    0xffffffff816cc085 <+21>:    push   %rbp
> 
> 5. If "func" is called, the int3 handler calls setup_detour_execution():
> 
>   if (p->flags & KPROBE_FLAG_OPTIMIZED) {
>     ...
>     regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
>     ...
>   }
> 
> The code at the destination address is:
> 
>    0xffffffffa021072c:  push   %r12
>    0xffffffffa021072e:  xor    %r12d,%r12d
>    0xffffffffa0210731:  jmp    0xffffffff816cb5ee <func+14>
> 
> However, <func+14> is not a valid instruction start address. As a result, an error occurs.

OK, this was introduced by the same commit as the previous one (delayed unoptimization).

> 
> Signed-off-by: Yang Jihong <yangjihong1@...wei.com>
> ---
>  arch/x86/kernel/kprobes/opt.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
> index 3718d6863555..e6d9bd038401 100644
> --- a/arch/x86/kernel/kprobes/opt.c
> +++ b/arch/x86/kernel/kprobes/opt.c
> @@ -353,7 +353,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
>  
>  	for (i = 1; i < op->optinsn.size; i++) {
>  		p = get_kprobe(op->kp.addr + i);
> -		if (p && !kprobe_disabled(p))
> +		if (p && (!kprobe_disabled(p) || kprobe_optimized(p)))

Hmm, can you rewrite this with kprobe_disarmed() instead of kprobe_disabled()?
This check asks whether any other kprobe is "armed" on an address that will
be replaced by the jump, so it is more natural to use the "disarmed" check.

Thank you,


>  			return -EEXIST;
>  	}
>  
> -- 
> 2.30.GIT
> 


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
