Message-ID: <CAEf4BzaqOt8DyB781geXYfrosmgQCkzDOCOH8WBVmCAPs+wQBw@mail.gmail.com>
Date:   Sat, 21 Oct 2023 21:59:44 -0700
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Tao Lyu <tao.lyu@...l.ch>
Cc:     ast@...nel.org, daniel@...earbox.net, john.fastabend@...il.com,
        andrii@...nel.org, martin.lau@...ux.dev, song@...nel.org,
        yonghong.song@...ux.dev, kpsingh@...nel.org, sdf@...gle.com,
        haoluo@...gle.com, jolsa@...nel.org, mykolal@...com,
        shuah@...nel.org, security@...nel.org,
        linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
        linux-kselftest@...r.kernel.org, sanidhya.kashyap@...l.ch,
        mathias.payer@...elwelt.net, meng.xu.cs@...terloo.ca,
        kartikeya.dwivedi@...l.ch
Subject: Re: [PATCH] Incorrect backtracking for load/store or atomic ops

On Fri, Oct 20, 2023 at 9:06 AM Tao Lyu <tao.lyu@...l.ch> wrote:
>
> Hi,
>
> I found that the backtracking logic of the eBPF verifier is flawed
> when it meets 1) normal load and store instructions or
> 2) atomic memory instructions.
>
> # Normal load and store
>
> Here, I show one case involving normal load and store instructions,
> which can be exploited to achieve arbitrary read and write under two requirements:
> 1) The program loader has at least CAP_BPF, which is required for most eBPF applications.
> 2) CPU mitigations are disabled by adding "mitigations=off" to the kernel boot command line. Otherwise,
> the Spectre mitigation in the eBPF verifier prevents exploitation.
>
>                                    1: r3 = r10 (stack pointer)
>                                    3:           if cond
>                                                  /           \
>                                                /                \
>         4: *(u64 *)(r3 -120) = 200      6: *(u64 *)(r3 -120) = arbitrary offset to r2
>                  verification state 1                  verification state 2 (prune point)
>                                               \                  /
>                                                 \              /
>                                       7:  r6 = *(u64 *)(r3 -120)
>                                                          ...
>                                     17:    r7 = a map pointer
>                                     18:            r7 += r6
>                          // Out-of-bound access from the right side path
>
> Consider an eBPF program (tools/testing/selftests/bpf/test_precise.c)
> whose simplified control flow graph looks like the above.
> When the verifier goes through the first (left-side) path and reaches insn 18,
> it backtracks on register r6 as shown below.
>
> 18: (0f) r7 += r6
> mark_precise: frame0: last_idx 18 first_idx 17 subseq_idx -1
> mark_precise: frame0: regs=r6 stack= before 17: (bf) r7 = r0
> ...
> mark_precise: frame0: regs=r6 stack= before 7: (79) r6 = *(u64 *)(r3 -120)
>
> However, the backtracking process becomes problematic when it reaches insn 7.
> Insn 7 loads a value from the stack, but the stack pointer is referenced through r3 instead of r10.
> ** In this case, the verifier (as shown below) resets the precision on r6 and does not mark any precision on the stack. **
> Afterward, the backtracking finishes without annotating any registers in any verifier states.
>
>     else if (class == BPF_LDX) {
>         if (!bt_is_reg_set(bt, dreg))
>             return 0;
>         bt_clear_reg(bt, dreg);
>         /* precision on dreg is cleared unconditionally, but the stack
>          * slot is only marked when the load goes through r10; a load
>          * through an alias such as r3 returns here, losing precision
>          */
>         if (insn->src_reg != BPF_REG_FP)
>             return 0;
>         ...
>     }
>
> Finally, when the second (right-side) path reaches insn 7 again,
> the verifier compares its state with the previously recorded one.
> It considers the two states equal because no precision is marked on r6,
> thus the eBPF program can easily pass the verifier
> although the second path contains an invalid access offset.
> We have successfully exploited this bug for getting the root privilege.
> If needed, we can share the exploitation.
> BTW, using similar instructions in a subprogram (sub_prog) can also trigger an assertion in the verifier:
> "[ 1510.165537] verifier backtracking bug
> [ 1510.165582] WARNING: CPU: 2 PID: 382 at kernel/bpf/verifier.c:3626 __mark_chain_precision+0x4568/0x4e50"
>
>
>
> IMO, to fully patch this bug, we need to know whether insn->src_reg is an alias of BPF_REG_FP.

Yes!

> However, it might need too much code addition.

No :) I don't think it's a lot of code. I've been meaning to tackle
this for a while, but perhaps the time is now.

The plan is to use jmp_history to record extra flags for some
instructions (even if they are not jumps). Like in this case, we can
remember for LDX and STX instructions that they were doing a register
load/spill, and take that into account during backtrack_insn() without
having to guess based on r10.
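
Roughly, the idea would look something like this (a hypothetical
sketch for illustration; the names and layout here are made up, not
the actual implementation):

    /* hypothetical per-instruction flag stored in jmp_history */
    #define INSN_F_STACK_ACCESS	BIT(0)

    struct bpf_jmp_history_entry {
            u32 idx;        /* instruction index */
            u32 flags;      /* e.g. INSN_F_STACK_ACCESS, recorded during the
                             * forward verification pass, where we still know
                             * whether src/dst points to the stack even when
                             * the register is not r10
                             */
    };

backtrack_insn() could then consult the recorded flag instead of
checking insn->src_reg against BPF_REG_FP.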

I have part of this ready locally, I'll try to finish it up in the
next week or two. Stay tuned (unless you want to tackle that
yourself).

> Or we just do not clear the precision on the src register.
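>
> For example (an illustrative sketch of that conservative variant,
> not a complete fix):
>
>     else if (class == BPF_LDX) {
>         if (!bt_is_reg_set(bt, dreg))
>             return 0;
>         /* if the load may come from the stack through a register
>          * other than r10, conservatively keep dreg marked precise
>          * instead of clearing it
>          */
>         if (insn->src_reg != BPF_REG_FP)
>             return 0;
>         bt_clear_reg(bt, dreg);
>         ...
>     }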
>
> # Atomic memory instructions
>
> Next, I show that the backtracking on atomic load and store instructions is also flawed.
> As shown below, when the backtrack_insn() function in the verifier meets a store instruction,
> it checks whether the stack slot is marked precise. If not, it simply returns.
>
>             /* spi is the stack slot index derived from insn->off;
>              * this branch is only reached when the store goes
>              * through r10 (BPF_REG_FP)
>              */
>             if (!bt_is_slot_set(bt, spi))
>                 return 0;
>             bt_clear_slot(bt, spi);
>             if (class == BPF_STX)
>                 bt_set_reg(bt, sreg);
>
> Assume we have an atomic_fetch_or instruction (tools/testing/selftests/bpf/verifier/atomic_precision.c) shown below.
>
> 7: (4c) w7 |= w3
> mark_precise: frame1: last_idx 7 first_idx 0 subseq_idx -1
> mark_precise: frame1: regs=r7 stack= before 6: (c3) r7 = atomic_fetch_or((u32 *)(r10 -120), r7)
> mark_precise: frame1: regs=r7 stack= before 5: (bf) r7 = r10
> mark_precise: frame1: regs=r10 stack= before 4: (7b) *(u64 *)(r3 -120) = r1
> mark_precise: frame1: regs=r10 stack= before 3: (bf) r3 = r10
> mark_precise: frame1: regs=r10 stack= before 2: (b7) r1 = 1000
> mark_precise: frame1: regs=r10 stack= before 0: (85) call pc+1
> BUG regs 400
>
> Before backtracking to it, r7 has already been marked as precise.
> Since the value of r7 after atomic_fetch_or comes from r10-120,
> the precision should be propagated to r10-120.
> However, because the stack slot r10-120 is not marked precise,
> the bt_is_slot_set(bt, spi) condition shown above is not satisfied.
> Finally, the function just returns without marking r10-120 as precise.

This seems like the same issue of not recognizing stack access
through any register other than r10?

Or is there something specific to atomic instructions here? I'm not
very familiar with them, so I'll need to analyse the code first.

>
> This bug can lead to the verifier's assertion as well:
> "[ 1510.165537] verifier backtracking bug
> [ 1510.165582] WARNING: CPU: 2 PID: 382 at kernel/bpf/verifier.c:3626 __mark_chain_precision+0x4568/0x4e50"
>
> I've attached a patch that correctly propagates the precision for atomic instructions.
> But it still doesn't solve the problem of stack slots being accessed through registers other than r10.
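>
> The idea is roughly as follows (an illustrative sketch, not the
> exact diff): for an atomic operation with fetch semantics, the value
> loaded into the source register comes from memory, so precision must
> flow from that register back to the stack slot:
>
>     /* sketch: atomic_fetch_* reads the old value at (r10 + insn->off)
>      * into insn->src_reg, so if sreg is marked precise, the stack
>      * slot it was fetched from must be marked precise as well;
>      * this only covers the fetched-value flow from the report
>      */
>     if (class == BPF_STX && BPF_MODE(insn->code) == BPF_ATOMIC &&
>         (insn->imm & BPF_FETCH)) {
>         if (bt_is_reg_set(bt, sreg)) {
>             bt_clear_reg(bt, sreg);
>             bt_set_slot(bt, spi);
>         }
>         return 0;
>     }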

I can try to solve stack access through non-r10 registers, but it would be very
useful if you could provide tests that trigger the situations you
described above. Your test_precise.c test below is not the right way to
add tests: it adds a new binary and generally doesn't fit into the
existing set of tests inside test_progs. Please see
progs/verifier_xadd.c and prog_tests/verifier.c and try to convert
your tests into that form (you will also be able to use inline
assembly instead of the painful BPF_xxx() instruction macros).
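
Just to illustrate the shape such a test could take (an untested
sketch; the section, description, and expectation annotations are
placeholders, not an actual reproducer):

    // SPDX-License-Identifier: GPL-2.0
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>
    #include "bpf_misc.h"

    SEC("socket")
    __description("stack spill/fill through a non-r10 alias")
    __success
    __naked void non_fp_stack_spill_fill(void)
    {
    	asm volatile ("					\
    	r3 = r10;					\
    	r0 = 200;					\
    	*(u64 *)(r3 - 120) = r0;			\
    	r6 = *(u64 *)(r3 - 120);			\
    	r0 = r6;					\
    	exit;						\
    "	::: __clobber_all);
    }

A real test for your case would also need the two-path structure and
the map pointer arithmetic from your diagram, and would assert on the
verifier log or on program rejection.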

Thanks.

>
> Signed-off-by: Tao Lyu <tao.lyu@...l.ch>
> ---
>  kernel/bpf/verifier.c                         |  58 +++++-
>  tools/testing/selftests/bpf/Makefile          |   6 +-
>  tools/testing/selftests/bpf/test_precise.c    | 186 ++++++++++++++++++
>  .../selftests/bpf/verifier/atomic_precision.c |  19 ++
>  4 files changed, 263 insertions(+), 6 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/test_precise.c
>  create mode 100644 tools/testing/selftests/bpf/verifier/atomic_precision.c
>

[...]
