Message-ID: <20180823214109.5b8f5756@vmware.local.home>
Date:   Thu, 23 Aug 2018 21:41:09 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Masami Hiramatsu <mhiramat@...nel.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Ravi Bangoria <ravi.bangoria@...ux.ibm.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Michael Rodin <michael@...in.online>,
        linux-kernel@...r.kernel.org
Subject: Re: [BUGFIX PATCH -tip] kprobes/x86: Fix to copy RIP relative
 instruction correctly

On Fri, 24 Aug 2018 02:16:12 +0900
Masami Hiramatsu <mhiramat@...nel.org> wrote:

> Dump of assembler code from 0xffffffffa000207a to 0xffffffffa00020ea:
> 	54	push   %rsp
> ...
> 	48 83 c4 08	add    $0x8,%rsp
> 	9d	popfq
> 	48 89 f0	mov    %rsi,%rax
> 	8b 35 82 7d db e2	mov    -0x1d24827e(%rip),%esi
>         # 0xffffffff82db9e67 <nr_cpu_ids+3>
> 
> As it shows, the 2nd mov accesses *(nr_cpu_ids+3) instead of
> *nr_cpu_ids. This leads to a kernel freeze because cpumask_next()
> always returns 0 and for_each_cpu() never ends.

Ouch! Nice catch.

> 
> Fix this by correctly adding len to the real RIP address while
> copying.
> 
> Fixes: 63fef14fc98a ("kprobes/x86: Make insn buffer always ROX and use text_poke()")
> Reported-by: Michael Rodin <michael@...in.online>
> Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
> Cc: stable@...r.kernel.org
> ---
>  arch/x86/kernel/kprobes/opt.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
> index eaf02f2e7300..e92672b8b490 100644
> --- a/arch/x86/kernel/kprobes/opt.c
> +++ b/arch/x86/kernel/kprobes/opt.c
> @@ -189,7 +189,8 @@ static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real)
>  	int len = 0, ret;
>  
>  	while (len < RELATIVEJUMP_SIZE) {
> -		ret = __copy_instruction(dest + len, src + len, real, &insn);
> +		ret = __copy_instruction(dest + len, src + len, real + len,
> +			&insn);
>  		if (!ret || !can_boost(&insn, src + len))
>  			return -EINVAL;
>  		len += ret;

Looking at the change that broke this, we have:

> -static int copy_optimized_instructions(u8 *dest, u8 *src)
> +static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real)
>  {
>         struct insn insn;
>         int len = 0, ret;
>  
>         while (len < RELATIVEJUMP_SIZE) {
> -               ret = __copy_instruction(dest + len, src + len, &insn);
> +               ret = __copy_instruction(dest + len, src + len, real, &insn);

Where "real" was added as a parameter to __copy_instruction. Note that
we pass in "dest + len" but not "real + len", which is what your patch
fixes. __copy_instruction was changed by the bad commit with:

> -int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
> +int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
>  {
>         kprobe_opcode_t buf[MAX_INSN_SIZE];
>         unsigned long recovered_insn =
> @@ -387,11 +388,11 @@ int __copy_instruction(u8 *dest, u8 *src, struct insn *insn)
>                  * have given.
>                  */
>                 newdisp = (u8 *) src + (s64) insn->displacement.value
> -                         - (u8 *) dest;
> +                         - (u8 *) real;

"real" replaces "dest", which was the first parameter to __copy_instruction.

>                         return 0;

And:

>  int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
>                                   struct kprobe *__unused)
>  {
> -       u8 *buf;
> -       int ret;
> +       u8 *buf = NULL, *slot;
> +       int ret, len;
>         long rel;
>  
>         if (!can_optimize((unsigned long)op->kp.addr))
>                 return -EILSEQ;
>  
> -       op->optinsn.insn = get_optinsn_slot();
> -       if (!op->optinsn.insn)
> +       buf = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL);
> +       if (!buf)
>                 return -ENOMEM;
>  
> +       op->optinsn.insn = slot = get_optinsn_slot();
> +       if (!slot) {
> +               ret = -ENOMEM;
> +               goto out;
> +       }
> +
>         /*
>          * Verify if the address gap is in 2GB range, because this uses
>          * a relative jump.
>          */
> -       rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE;
> +       rel = (long)slot - (long)op->kp.addr + RELATIVEJUMP_SIZE;
>         if (abs(rel) > 0x7fffffff) {
> -               __arch_remove_optimized_kprobe(op, 0);
> -               return -ERANGE;
> +               ret = -ERANGE;
> +               goto err;
>         }
>  
> -       buf = (u8 *)op->optinsn.insn;

"slot" is equivalent to the old "buf".

> -       set_memory_rw((unsigned long)buf & PAGE_MASK, 1);
> +       /* Copy arch-dep-instance from template */
> +       memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
>  
>         /* Copy instructions into the out-of-line buffer */
> -       ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr);
> -       if (ret < 0) {
> -               __arch_remove_optimized_kprobe(op, 0);
> -               return ret;
> -       }
> +       ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr,
> +                                         slot + TMPL_END_IDX);

We pass in "real" as "slot + TMPL_END_IDX" and "dest" as "buf +
TMPL_END_IDX"; thus, to match the behavior before this commit, "real"
needs "+ len" added to it, just as your patch does.

That said...

Reviewed-by: Steven Rostedt (VMware) <rostedt@...dmis.org>

-- Steve


> +       if (ret < 0)
> +               goto err;
