Message-ID: <gpqzoa2kvemzeuwpc2q4jnlcgscut5ouz7gcnd3e5my7vuml4a@bhhditb2jzq5>
Date: Thu, 9 Jan 2025 16:30:53 -0700
From: Daniel Xu <dxu@...uu.xyz>
To: Eduard Zingerman <eddyz87@...il.com>
Cc: andrii@...nel.org, ast@...nel.org, shuah@...nel.org, 
	daniel@...earbox.net, john.fastabend@...il.com, martin.lau@...ux.dev, song@...nel.org, 
	yonghong.song@...ux.dev, kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com, 
	jolsa@...nel.org, mykolal@...com, bpf@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH bpf-next v6 4/5] bpf: verifier: Support eliding map lookup nullness

On Thu, Jan 02, 2025 at 06:53:54PM -0800, Eduard Zingerman wrote:
> On Thu, 2024-12-19 at 21:09 -0700, Daniel Xu wrote:
> 
> lgtm, but please see a note below.
> 
> [...]
> 
> > +/* Returns constant key value if possible, else negative error */
> > +static s64 get_constant_map_key(struct bpf_verifier_env *env,
> > +				struct bpf_reg_state *key,
> > +				u32 key_size)
> > +{
> > +	struct bpf_func_state *state = func(env, key);
> > +	struct bpf_reg_state *reg;
> > +	int slot, spi, off;
> > +	int spill_size = 0;
> > +	int zero_size = 0;
> > +	int stack_off;
> > +	int i, err;
> > +	u8 *stype;
> > +
> > +	if (!env->bpf_capable)
> > +		return -EOPNOTSUPP;
> > +	if (key->type != PTR_TO_STACK)
> > +		return -EOPNOTSUPP;
> > +	if (!tnum_is_const(key->var_off))
> > +		return -EOPNOTSUPP;
> > +
> > +	stack_off = key->off + key->var_off.value;
> > +	slot = -stack_off - 1;
> > +	spi = slot / BPF_REG_SIZE;
> > +	off = slot % BPF_REG_SIZE;
> > +	stype = state->stack[spi].slot_type;
> > +
> > +	/* First handle precisely tracked STACK_ZERO */
> > +	for (i = off; i >= 0 && stype[i] == STACK_ZERO; i--)
> > +		zero_size++;
> > +	if (zero_size >= key_size)
> > +		return 0;
> > +
> > +	/* Check that stack contains a scalar spill of expected size */
> > +	if (!is_spilled_scalar_reg(&state->stack[spi]))
> > +		return -EOPNOTSUPP;
> > +	for (i = off; i >= 0 && stype[i] == STACK_SPILL; i--)
> > +		spill_size++;
> > +	if (spill_size != key_size)
> > +		return -EOPNOTSUPP;
> > +
> > +	reg = &state->stack[spi].spilled_ptr;
> > +	if (!tnum_is_const(reg->var_off))
> > +		/* Stack value not statically known */
> > +		return -EOPNOTSUPP;
> > +
> > +	/* We are relying on a constant value. So mark as precise
> > +	 * to prevent pruning on it.
> > +	 */
> > +	bt_set_frame_slot(&env->bt, env->cur_state->curframe, spi);
> 
> I think env->cur_state->curframe is not always correct here.
> It should be key->frameno, as key might point to a stack slot a few frames up.

Ack, nice catch.
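
For the archives, here is a rough sketch of the case being described (the
program, map, function and section names below are made up, not from this
series): when a static subprogram does the lookup with a key pointer passed
in from its caller's stack, then at the bpf_map_lookup_elem() call
key->frameno refers to the caller's frame while env->cur_state->curframe is
the callee's, so marking the slot precise in curframe would hit the wrong
frame's stack.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} arr SEC(".maps");

/* Not inlined, so this becomes a separate bpf-to-bpf subprogram and the
 * lookup runs in a deeper frame than the one holding 'key'.
 */
static __noinline __u64 *lookup_in_caller_frame(__u32 *key)
{
	/* 'key' is PTR_TO_STACK into the caller's frame here */
	return bpf_map_lookup_elem(&arr, key);
}

SEC("tc")
int prog(struct __sk_buff *skb)
{
	__u32 key = 3;	/* constant key, spilled on this frame's stack */
	__u64 *val;

	val = lookup_in_caller_frame(&key);
	return val ? 0 : 1;
}

char _license[] SEC("license") = "GPL";

IOW the precision mark presumably needs to use key->frameno, i.e. something
like bt_set_frame_slot(&env->bt, key->frameno, spi), rather than curframe.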

