Message-ID: <bf909ed6b01224e03f0b2770f041f5b3ecb4b218.camel@gmail.com>
Date: Wed, 10 Jan 2024 23:04:39 +0200
From: Eduard Zingerman <eddyz87@...il.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>, Maxim Mikityanskiy
<maxtram95@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
<daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, Shung-Hsi Yu
<shung-hsi.yu@...e.com>, John Fastabend <john.fastabend@...il.com>, Martin
KaFai Lau <martin.lau@...ux.dev>, Song Liu <song@...nel.org>, Yonghong Song
<yonghong.song@...ux.dev>, KP Singh <kpsingh@...nel.org>, Stanislav
Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa
<jolsa@...nel.org>, Mykola Lysenko <mykolal@...com>, Shuah Khan
<shuah@...nel.org>, "David S. Miller" <davem@...emloft.net>, Jakub Kicinski
<kuba@...nel.org>, Jesper Dangaard Brouer <hawk@...nel.org>,
bpf@...r.kernel.org, linux-kselftest@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for
spilled scalars
On Tue, 2024-01-09 at 16:22 -0800, Andrii Nakryiko wrote:
[...]
> > static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> >                       struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
> > {
> > +        struct bpf_reg_state unbound_reg = {};
> > +        struct bpf_reg_state zero_reg = {};
> >          int i, spi;
> >
> > +        __mark_reg_unknown(env, &unbound_reg);
> > +        __mark_reg_const_zero(env, &zero_reg);
> > +        zero_reg.precise = true;
>
> these are immutable, right? Would it make sense to set them up just
> once as static variables instead of initializing on each check?
Should be possible.
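E.g. something like this (just a sketch, assuming the env-dependent parts of
__mark_reg_unknown() and __mark_reg_const_zero() can be factored into env-free
helpers; the *_noenv() names below are made up):

        static struct bpf_reg_state unbound_reg;
        static struct bpf_reg_state zero_reg;

        /* initialize the two template registers once at boot instead of
         * on every stacksafe() call; they are never mutated afterwards
         */
        static int __init stacksafe_tmpl_init(void)
        {
                __mark_reg_unknown_noenv(&unbound_reg);
                __mark_reg_const_zero_noenv(&zero_reg);
                zero_reg.precise = true;
                return 0;
        }
        late_initcall(stacksafe_tmpl_init);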
> > +
> >          /* walk slots of the explored stack and ignore any additional
> >           * slots in the current stack, since explored(safe) state
> >           * didn't use them
> > @@ -16484,6 +16524,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> >                           continue;
> >                  }
> >
>
> we didn't check that cur->stack[spi] is ok to access yet, it's done a
> bit later with `if (i >= cur->allocated_stack)`, if I'm not mistaken.
> So these checks would need to be moved a bit lower, probably.
Right. And it seems the issue is already present:
        if (exact &&
            old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
            cur->stack[spi].slot_type[i % BPF_REG_SIZE])
                return false;

This is currently executed before the `if (i >= cur->allocated_stack)` check as well.
Introduced by another commit of mine :(
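A minimal fix might be to fold the bounds check into the exact check, so that
cur->stack[spi] is never accessed out of bounds (just a sketch):

        if (exact &&
            (i >= cur->allocated_stack ||
             old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
             cur->stack[spi].slot_type[i % BPF_REG_SIZE]))
                return false;

This relies on || short-circuiting and leaves the non-exact path unchanged.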
> > +                /* load of stack value with all MISC and ZERO slots produces unbounded
> > +                 * scalar value, call regsafe to ensure scalar ids are compared.
> > +                 */
> > +                if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
> > +                    is_stack_unbound_slot64(env, &cur->stack[spi])) {
> > +                        i += BPF_REG_SIZE - 1;
> > +                        if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
> > +                                     idmap, exact))
> > +                                return false;
> > +                        continue;
> > +                }
> > +
> > +                if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
> > +                    is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
> > +                        i += BPF_REG_SIZE - 1;
> > +                        if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
> > +                                     idmap, exact))
> > +                                return false;
> > +                        continue;
> > +                }
>
> scalar_old = scalar_cur = NULL;
> if (is_spilled_unbound64(&old->..))
>         scalar_old = old->stack[spi].slot_type[0] == STACK_SPILL ?
>                      &old->stack[spi].spilled_ptr : &unbound_reg;
> if (is_spilled_unbound64(&cur->..))
>         scalar_cur = cur->stack[spi].slot_type[0] == STACK_SPILL ?
>                      &cur->stack[spi].spilled_ptr : &unbound_reg;
> if (scalar_old && scalar_cur) {
>         if (!regsafe(env, scalar_old, scalar_cur, idmap, exact))
>                 return false;
>         i += BPF_REG_SIZE - 1;
>         continue;
> }
Ok, I'll switch to this.
(Although I think the old variant is a bit simpler to follow.)
> where is_spilled_unbound64() would be basically `return
> is_spilled_unbound_scalar_reg64(&old->..) ||
> is_stack_unbound_slot64(&old->...)`;
>
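Spelled out for reference (a sketch; I kept the env parameter since
is_stack_unbound_slot64() takes it):

        static bool is_spilled_unbound64(struct bpf_verifier_env *env,
                                         struct bpf_stack_state *stack)
        {
                /* either a spilled unbound scalar register or a 64-bit
                 * stack slot composed only of MISC/ZERO bytes
                 */
                return is_spilled_unbound_scalar_reg64(stack) ||
                       is_stack_unbound_slot64(env, stack);
        }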
> Similarly for the zero case? Though I'm wondering if the zero case should be
> checked first, as it's actually a subset of is_spilled_unbound64 when it
> comes to STACK_ZERO/STACK_MISC mixes, no?
Yes, makes sense.
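E.g. along these lines, with a hypothetical is_spilled_zero64() helper that
mirrors is_spilled_unbound64() but matches only spilled zero scalars and
all-STACK_ZERO slots (sketch):

        /* zero case first: all-ZERO slots also satisfy the unbound
         * MISC/ZERO check, so they have to be matched before it
         */
        scalar_old = scalar_cur = NULL;
        if (is_spilled_zero64(env, &old->stack[spi]))
                scalar_old = old->stack[spi].slot_type[0] == STACK_SPILL ?
                             &old->stack[spi].spilled_ptr : &zero_reg;
        if (is_spilled_zero64(env, &cur->stack[spi]))
                scalar_cur = cur->stack[spi].slot_type[0] == STACK_SPILL ?
                             &cur->stack[spi].spilled_ptr : &zero_reg;
        if (scalar_old && scalar_cur) {
                if (!regsafe(env, scalar_old, scalar_cur, idmap, exact))
                        return false;
                i += BPF_REG_SIZE - 1;
                continue;
        }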
[...]