Date: Thu, 14 Dec 2023 18:16:50 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Eduard Zingerman <eddyz87@...il.com>
Cc: Andrii Nakryiko <andrii.nakryiko@...il.com>, Hao Sun <sunhao.th@...il.com>, 
	Alexei Starovoitov <ast@...nel.org>, Andrii Nakryiko <andrii@...nel.org>, 
	Daniel Borkmann <daniel@...earbox.net>, bpf <bpf@...r.kernel.org>, 
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [Bug Report] bpf: incorrectly pruning runtime execution path

On Thu, Dec 14, 2023 at 5:24 PM Eduard Zingerman <eddyz87@...il.com> wrote:
>
> On Fri, 2023-12-15 at 02:49 +0200, Eduard Zingerman wrote:
> > On Thu, 2023-12-14 at 16:06 -0800, Andrii Nakryiko wrote:
> > [...]
> > > If you agree with the analysis, we can start discussing what's the
> > > best way to fix this.
> >
> > Ok, yep, I agree with you.
> > The backtracker marks both registers of an 'if' statement if one of
> > them is tracked, but r8 is not marked at the block entry, so we miss r0.
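
For reference, the conditional-jump handling in backtrack_insn() is
roughly the following (paraphrased from the 6.7-era verifier, not
verbatim): when either side of a reg-reg comparison needs precision,
both sides get marked, but registers that merely shared a scalar ID
with them at the time of the jump do not:

    /* backtrack_insn(), BPF_JMP/BPF_JMP32, BPF_X case, paraphrased */
    if (!bt_is_reg_set(bt, dreg) && !bt_is_reg_set(bt, sreg))
            return 0;
    /* dreg <cond> sreg: if either side needs precision, the other
     * side must become precise as well...
     */
    bt_set_reg(bt, dreg);
    bt_set_reg(bt, sreg);
    /* ...but nothing here marks other registers (r0 in the log
     * below) that shared dreg's/sreg's ID when find_equal_scalars()
     * ran for this jump.
     */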
>
> The brute-force solution is to keep a special mask for each
> conditional jump in the jump history. In this mask, mark all registers
> and stack slots that gained range because of find_equal_scalars()
> executed for this conditional jump, and use this mask to extend the
> precise registers set. However, such a mask would be prohibitively
> large: (10+64)*8 bits (10 registers plus 64 stack slots, for each of
> the 8 possible call frames).
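
For scale, a per-entry mask of that shape might look like the
hypothetical struct below (names are illustrative, not kernel code);
10 registers plus 64 stack slots across the 8 possible call frames is
where the (10+64)*8 figure comes from:

    /* hypothetical, illustrative layout only */
    struct jmp_equal_scalars_mask {
            u16 reg_mask[8];     /* 10 register bits per frame   */
            u64 stack_mask[8];   /* 64 stack slot bits per frame */
    };                           /* (10+64)*8 = 592 bits in use  */
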
>
> ---
>
> Here is an option that would fix the test in question, but I'm not
> sure if it covers all cases:
> 1. At the last instruction of each state (the first instruction to be
>    backtracked) we know the set of IDs that should be tracked for
>    precision, as currently marked by mark_precise_scalar_ids().
> 2. In the jump history, record the IDs of the src and dst registers
>    when a new entry is pushed.
> 3. While backtracking an 'if' statement, if one of the recorded IDs is
>    in the set identified at (1), add the src/dst regs to the precise
>    registers set (a sketch follows below).
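
A rough sketch of steps (2) and (3), assuming a widened jump history
entry; the dst_id/src_id fields and the id_set_contains() helper are
hypothetical, only push_jmp_history() and the bt_*() helpers exist in
the current verifier:

    /* (2) when pushing a jump history entry for a conditional jump,
     * remember the scalar IDs of the dst/src registers:
     */
            entry->dst_id = dst_reg->id;  /* 0 if not an id-carrying scalar */
            entry->src_id = src_reg->id;

    /* (3) while backtracking the 'if', consult the ID set computed
     * at the state's last instruction in step (1):
     */
            if (id_set_contains(precise_ids, entry->dst_id) ||
                id_set_contains(precise_ids, entry->src_id)) {
                    bt_set_reg(bt, dreg);
                    bt_set_reg(bt, sreg);
            }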
>
> E.g. for the test-case at hand:
>
>   0: (85) call bpf_get_prandom_u32#7    ; R0=scalar()
>   1: (bf) r7 = r0                       ; R0=scalar(id=1) R7_w=scalar(id=1)
>   2: (bf) r8 = r0                       ; R0=scalar(id=1) R8_w=scalar(id=1)
>   3: (85) call bpf_get_prandom_u32#7    ; R0=scalar()
>   --- checkpoint #1 r7.id = 1, r8.id = 1 ---
>   4: (25) if r0 > 0x1 goto pc+0         ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=1,...)
>   --- checkpoint #2 r7.id = 1, r8.id = 1 ---
>   5: (3d) if r8 >= r0 goto pc+3         ; R0=1 R8=0 | record r8.id=1 in jump history
>   6: (0f) r8 += r8                      ; R8=0

Can we detect that a register link is broken and force a checkpoint here?
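
A minimal way to express that, assuming the force_checkpoint bit added
for the iterator convergence checks can be reused (untested sketch; the
trigger condition is an assumption):

    /* in the ALU handling: overwriting an id-carrying scalar breaks
     * its link to same-id registers, so request a checkpoint at this
     * instruction (sketch only):
     */
    if (dst_reg->type == SCALAR_VALUE && dst_reg->id)
            env->insn_aux_data[env->insn_idx].force_checkpoint = true;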

>   --- checkpoint #3 r7.id = 1, r8.id = 0 ---
>   7: (15) if r7 == 0x0 goto pc+1
