Message-ID: <20180423001615.wlxnlp6xdquzrntt@ast-mbp>
Date:   Sun, 22 Apr 2018 18:16:16 -0600
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Yonghong Song <yhs@...com>
Cc:     ast@...com, daniel@...earbox.net, netdev@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH bpf-next v3 4/9] bpf/verifier: improve register value
 range tracking with ARSH

On Fri, Apr 20, 2018 at 03:18:37PM -0700, Yonghong Song wrote:
> When a helper like bpf_get_stack returns an int value
> that is later used in arithmetic computation, the LSH and ARSH
> operations are often required to get proper sign extension into
> 64 bits. For example, without this patch:
>     54: R0=inv(id=0,umax_value=800)
>     54: (bf) r8 = r0
>     55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
>     55: (67) r8 <<= 32
>     56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
>     56: (c7) r8 s>>= 32
>     57: R8=inv(id=0)
> With this patch:
>     54: R0=inv(id=0,umax_value=800)
>     54: (bf) r8 = r0
>     55: R0=inv(id=0,umax_value=800) R8_w=inv(id=0,umax_value=800)
>     55: (67) r8 <<= 32
>     56: R8_w=inv(id=0,umax_value=3435973836800,var_off=(0x0; 0x3ff00000000))
>     56: (c7) r8 s>>= 32
>     57: R8=inv(id=0, umax_value=800,var_off=(0x0; 0x3ff))
> With the better range for "R8", when "R8" is later added to another
> register, e.g., a map pointer or a scalar-value register, a better
> range can be derived for the result and a verifier failure may be avoided.
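
For the constant-shift case in the log above, my reading of the new
BPF_ARSH arithmetic (the shift amount is a known constant, so
umin_val == umax_val == 32) is:

    after r8 <<= 32:  umax_value = 3435973836800 (800 << 32)
                      var_off    = (0x0; 0x3ff00000000)
    r8 s>>= 32:
        umin_value >>= 32                   -> 0
        umax_value >>= 32                   -> 800
        var_off = tnum_rshift(var_off, 32)  -> (0x0; 0x3ff)

which matches "57: R8=inv(id=0, umax_value=800,var_off=(0x0; 0x3ff))".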
> 
> In our later example,
>     ......
>     usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);
>     if (usize < 0)
>         return 0;
>     ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
>     ......
> Without improved ARSH value range tracking, the register representing
> "max_len - usize" will have smin_value equal to S64_MIN and will be
> rejected by the verifier.
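
My reading of why this fails: without the narrowing, R8 (usize after the
<<= 32 / s>>= 32 pair) remains a fully unknown scalar, so the result of
"max_len - usize" cannot be proven non-negative and the size argument of
the second bpf_get_stack() call is rejected.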
> 
> Signed-off-by: Yonghong Song <yhs@...com>
> ---
>  kernel/bpf/verifier.c | 26 ++++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 3c8bb92..01c215d 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2975,6 +2975,32 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
>  		/* We may learn something more from the var_off */
>  		__update_reg_bounds(dst_reg);
>  		break;
> +	case BPF_ARSH:
> +		if (umax_val >= insn_bitness) {
> +			/* Shifts greater than 31 or 63 are undefined.
> +			 * This includes shifts by a negative number.
> +			 */
> +			mark_reg_unknown(env, regs, insn->dst_reg);
> +			break;
> +		}
> +		if (dst_reg->smin_value < 0)
> +			dst_reg->smin_value >>= umin_val;
> +		else
> +			dst_reg->smin_value >>= umax_val;
> +		if (dst_reg->smax_value < 0)
> +			dst_reg->smax_value >>= umax_val;
> +		else
> +			dst_reg->smax_value >>= umin_val;
> +		if (src_known)
> +			dst_reg->var_off = tnum_rshift(dst_reg->var_off,
> +						       umin_val);
> +		else
> +			dst_reg->var_off = tnum_rshift(tnum_unknown, umin_val);
> +		dst_reg->umin_value >>= umax_val;
> +		dst_reg->umax_value >>= umin_val;
> +		/* We may learn something more from the var_off */
> +		__update_reg_bounds(dst_reg);

I'm struggling to understand how these bounds are computed.
Could you add examples in the comments?
In particular, if dst_reg is unknown (tnum.mask == -1),
the above tnum_rshift() will clear the upper bits and make the result
64-bit positive, but that doesn't seem correct.
What am I missing?
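
A toy userspace model of what I mean, with struct tnum and tnum_rshift()
simplified from my reading of kernel/bpf/tnum.c (illustration only, not
the kernel code):

    #include <stdint.h>
    #include <stdio.h>

    struct tnum { uint64_t value; uint64_t mask; };

    /* all 64 bits unknown */
    static const struct tnum tnum_unknown = { 0, ~0ULL };

    /* logical right shift of both halves, as tnum_rshift() does */
    static struct tnum tnum_rshift(struct tnum a, uint8_t shift)
    {
            return (struct tnum){ a.value >> shift, a.mask >> shift };
    }

    int main(void)
    {
            struct tnum t = tnum_rshift(tnum_unknown, 32);

            /* prints value=0 mask=0xffffffff: the top 32 bits are now
             * treated as known zero, i.e. the register looks non-negative,
             * even though an arithmetic shift of a negative value would
             * have kept the sign bits set.
             */
            printf("value=%#llx mask=%#llx\n",
                   (unsigned long long)t.value,
                   (unsigned long long)t.mask);
            return 0;
    }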
