Message-ID: <1b0e59ee87b765513c6488112e6e3e3cf4af7cb6.camel@gmail.com>
Date: Thu, 12 Dec 2024 20:04:45 -0800
From: Eduard Zingerman <eddyz87@...il.com>
To: Daniel Xu <dxu@...uu.xyz>, andrii@...nel.org, ast@...nel.org, 
	shuah@...nel.org, daniel@...earbox.net
Cc: john.fastabend@...il.com, martin.lau@...ux.dev, song@...nel.org, 
	yonghong.song@...ux.dev, kpsingh@...nel.org, sdf@...ichev.me,
 haoluo@...gle.com, 	jolsa@...nel.org, mykolal@...com, bpf@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org, 
	netdev@...r.kernel.org
Subject: Re: [PATCH bpf-next v5 4/5] bpf: verifier: Support eliding map lookup nullness

On Thu, 2024-12-12 at 16:22 -0700, Daniel Xu wrote:

I think these changes are fine in general, but see below.

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 58b36cc96bd5..4947ef884a18 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -287,6 +287,7 @@ struct bpf_call_arg_meta {
>  	u32 ret_btf_id;
>  	u32 subprogno;
>  	struct btf_field *kptr_field;
> +	s64 const_map_key;
>  };
>  
>  struct bpf_kfunc_call_arg_meta {
> @@ -9163,6 +9164,53 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
>  	return 0;
>  }
>  
> +/* Returns constant key value if possible, else -1 */
> +static s64 get_constant_map_key(struct bpf_verifier_env *env,
> +				struct bpf_reg_state *key,
> +				u32 key_size)

I understand that this is not your use case, but maybe generalize this
a bit by checking the maximal register value instead of requiring an
exact constant?

> +{
> +	struct bpf_func_state *state = func(env, key);
> +	struct bpf_reg_state *reg;
> +	int zero_size = 0;
> +	int stack_off;
> +	u8 *stype;
> +	int slot;
> +	int spi;
> +	int i;
> +
> +	if (!env->bpf_capable)
> +		return -1;
> +	if (key->type != PTR_TO_STACK)
> +		return -1;
> +	if (!tnum_is_const(key->var_off))
> +		return -1;
> +
> +	stack_off = key->off + key->var_off.value;
> +	slot = -stack_off - 1;
> +	spi = slot / BPF_REG_SIZE;
> +
> +	/* First handle precisely tracked STACK_ZERO, up to BPF_REG_SIZE */
> +	stype = state->stack[spi].slot_type;
> +	for (i = 0; i < BPF_REG_SIZE && stype[i] == STACK_ZERO; i++)
> +		zero_size++;
> +	if (zero_size == key_size)
> +		return 0;
> +
> +	if (!is_spilled_reg(&state->stack[spi]))
> +		/* Not pointer to stack */
> +		return -1;

Nit: there is an 'is_spilled_scalar_reg' utility function that combines these two checks.

> +
> +	reg = &state->stack[spi].spilled_ptr;
> +	if (reg->type != SCALAR_VALUE)
> +		/* Only scalars are valid array map keys */
> +		return -1;
> +	else if (!tnum_is_const(reg->var_off))
> +		/* Stack value not statically known */
> +		return -1;

I think you also need to check that the size of the spill matches the
size of the key. The mismatch is unsafe when the spill size is smaller
than the key size. E.g. consider a 1-byte spill with mask 'mmmmmmrr'
and a 4-byte key: at runtime the 'mmmmmm' part might be non-zero,
rendering the key out of range.

> +
> +	return reg->var_off.value;
> +}
> +
>  static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  			  struct bpf_call_arg_meta *meta,
>  			  const struct bpf_func_proto *fn,

[...]

