Date:   Wed, 25 Mar 2020 23:20:01 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     John Fastabend <john.fastabend@...il.com>
Cc:     ecree@...arflare.com, yhs@...com, daniel@...earbox.net,
        netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [bpf-next PATCH 04/10] bpf: verifier, do explicit ALU32 bounds
 tracking

On Tue, Mar 24, 2020 at 10:38:56AM -0700, John Fastabend wrote:
> -static void __reg_bound_offset32(struct bpf_reg_state *reg)
> +static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
>  {
> -	u64 mask = 0xffffFFFF;
> -	struct tnum range = tnum_range(reg->umin_value & mask,
> -				       reg->umax_value & mask);
> -	struct tnum lo32 = tnum_cast(reg->var_off, 4);
> -	struct tnum hi32 = tnum_lshift(tnum_rshift(reg->var_off, 32), 32);
> +	/* special case when 64-bit register has upper 32-bit register
> +	 * zeroed. Typically happens after zext or <<32, >>32 sequence
> +	 * allowing us to use 32-bit bounds directly,
> +	 */
> +	if (tnum_equals_const(tnum_clear_subreg(reg->var_off), 0)) {
> +		reg->umin_value = reg->u32_min_value;
> +		reg->umax_value = reg->u32_max_value;
> +		reg->smin_value = reg->s32_min_value;
> +		reg->smax_value = reg->s32_max_value;

Looks like the above will not be correct for negative s32_min/max.
When the upper 32 bits are cleared and we're processing jmp32
we cannot set smax_value to s32_max_value.
Consider if (w0 s< -5):
s32_max_value == -5
which is 0xfffffffb
but the upper 32 bits are zero, so smax_value should be (u64)0xfffffffb
and not (s64)-5
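
To spell that out with a standalone snippet (plain userspace C, not
verifier code): zero-extending a negative 32-bit bound gives a large
positive 64-bit value, so it cannot be used as a signed 64-bit max as-is:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	int32_t s32_max = -5;                         /* bit pattern 0xfffffffb */
	uint64_t reg64 = (uint64_t)(uint32_t)s32_max; /* upper 32 bits zeroed */

	/* prints 0xfffffffb (4294967291): the largest value the 64-bit
	 * register can actually hold here, so smax_value must be this,
	 * not (s64)-5
	 */
	printf("reg64 = 0x%" PRIx64 " (%" PRIu64 ")\n", reg64, reg64);
	return 0;
}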

We can be fancy and precise with this logic, but I would just use the same
approach as in zext_32_to_64(), where the following:
+       if (reg->s32_min_value > 0)
+               reg->smin_value = reg->s32_min_value;
+       else
+               reg->smin_value = 0;
+       if (reg->s32_max_value > 0)
+               reg->smax_value = reg->s32_max_value;
+       else
+               reg->smax_value = U32_MAX;
should work for this case too?
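
If it helps to visualize, a rough sketch (untested, just combining the
unsigned copies from your hunk with the guarded signed copies above) of the
special case could look like:

	if (tnum_equals_const(tnum_clear_subreg(reg->var_off), 0)) {
		reg->umin_value = reg->u32_min_value;
		reg->umax_value = reg->u32_max_value;
		/* signed 32-bit bounds carry over only when non-negative;
		 * otherwise fall back to the conservative [0, U32_MAX] range,
		 * since the upper 32 bits are known zero
		 */
		if (reg->s32_min_value > 0)
			reg->smin_value = reg->s32_min_value;
		else
			reg->smin_value = 0;
		if (reg->s32_max_value > 0)
			reg->smax_value = reg->s32_max_value;
		else
			reg->smax_value = U32_MAX;
	}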

> +	if (BPF_SRC(insn->code) == BPF_K) {
> +		pred = is_branch_taken(dst_reg, insn->imm, opcode, is_jmp32);
> +	} else if (src_reg->type == SCALAR_VALUE && is_jmp32 && tnum_is_const(tnum_subreg(src_reg->var_off))) {
> +		pred = is_branch_taken(dst_reg, tnum_subreg(src_reg->var_off).value, opcode, is_jmp32);
> +	} else if (src_reg->type == SCALAR_VALUE && !is_jmp32 && tnum_is_const(src_reg->var_off)) {
> +		pred = is_branch_taken(dst_reg, src_reg->var_off.value, opcode, is_jmp32);
> +	}

pls wrap these lines, they're way above the normal length.
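
Something along these lines (just rewrapped, no logic change):

	if (BPF_SRC(insn->code) == BPF_K) {
		pred = is_branch_taken(dst_reg, insn->imm,
				       opcode, is_jmp32);
	} else if (src_reg->type == SCALAR_VALUE && is_jmp32 &&
		   tnum_is_const(tnum_subreg(src_reg->var_off))) {
		pred = is_branch_taken(dst_reg,
				       tnum_subreg(src_reg->var_off).value,
				       opcode, is_jmp32);
	} else if (src_reg->type == SCALAR_VALUE && !is_jmp32 &&
		   tnum_is_const(src_reg->var_off)) {
		pred = is_branch_taken(dst_reg, src_reg->var_off.value,
				       opcode, is_jmp32);
	}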

The rest is awesome.
