Message-ID: <CAEf4BzaKS_f9uOorcCbsoui9KNRA0HxX8=A+9sNAq+1mJoy-kg@mail.gmail.com>
Date: Wed, 10 Jan 2024 13:52:12 -0800
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Eduard Zingerman <eddyz87@...il.com>
Cc: Maxim Mikityanskiy <maxtram95@...il.com>, Alexei Starovoitov <ast@...nel.org>, 
	Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, 
	Shung-Hsi Yu <shung-hsi.yu@...e.com>, John Fastabend <john.fastabend@...il.com>, 
	Martin KaFai Lau <martin.lau@...ux.dev>, Song Liu <song@...nel.org>, 
	Yonghong Song <yonghong.song@...ux.dev>, KP Singh <kpsingh@...nel.org>, 
	Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>, 
	Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>, 
	"David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Jesper Dangaard Brouer <hawk@...nel.org>, bpf@...r.kernel.org, linux-kselftest@...r.kernel.org, 
	netdev@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars

On Wed, Jan 10, 2024 at 1:04 PM Eduard Zingerman <eddyz87@...il.com> wrote:
>
> On Tue, 2024-01-09 at 16:22 -0800, Andrii Nakryiko wrote:
> [...]
> > >  static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > >                       struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
> > >  {
> > > +       struct bpf_reg_state unbound_reg = {};
> > > +       struct bpf_reg_state zero_reg = {};
> > >         int i, spi;
> > >
> > > +       __mark_reg_unknown(env, &unbound_reg);
> > > +       __mark_reg_const_zero(env, &zero_reg);
> > > +       zero_reg.precise = true;
> >
> > these are immutable, right? Would it make sense to set them up just
> > once as static variables instead of initializing on each check?
>
> Should be possible.
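
Not tested, but roughly what I had in mind; note that since
__mark_reg_unknown() and __mark_reg_const_zero() take env, this would
probably have to be a lazy one-time init rather than a true static
initializer:

  /* untested sketch: fill the template regs once instead of on every
   * stacksafe() call; assumes the result doesn't depend on
   * per-verification env state
   */
  static struct bpf_reg_state unbound_reg;
  static struct bpf_reg_state zero_reg;
  static bool tmpl_inited;

  if (!tmpl_inited) {
          __mark_reg_unknown(env, &unbound_reg);
          __mark_reg_const_zero(env, &zero_reg);
          zero_reg.precise = true;
          tmpl_inited = true;
  }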
>
> > > +
> > >         /* walk slots of the explored stack and ignore any additional
> > >          * slots in the current stack, since explored(safe) state
> > >          * didn't use them
> > > @@ -16484,6 +16524,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> > >                         continue;
> > >                 }
> > >
> >
> > we didn't check that cur->stack[spi] is ok to access yet; that's done a
> > bit later with `if (i >= cur->allocated_stack)`, if I'm not mistaken.
> > So these checks would need to be moved a bit lower, probably.
>
> Right. And it seems the issue is already present:
>
>                 if (exact &&
>                     old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
>                     cur->stack[spi].slot_type[i % BPF_REG_SIZE])
>                         return false;
>
> This is currently executed before the `if (i >= cur->allocated_stack)` check as well.
> It was introduced by another commit of mine :(

I guess we'll need to move that too, then.
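
Or, instead of moving it, fold the bounds check into the condition
itself, something like (untested):

  if (exact &&
      (i >= cur->allocated_stack ||
       old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
       cur->stack[spi].slot_type[i % BPF_REG_SIZE]))
          return false;

so cur->stack[spi] is only dereferenced once we know the slot exists.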

>
> > > +               /* load of stack value with all MISC and ZERO slots produces unbounded
> > > +                * scalar value, call regsafe to ensure scalar ids are compared.
> > > +                */
> > > +               if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
> > > +                   is_stack_unbound_slot64(env, &cur->stack[spi])) {
> > > +                       i += BPF_REG_SIZE - 1;
> > > +                       if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
> > > +                                    idmap, exact))
> > > +                               return false;
> > > +                       continue;
> > > +               }
> > > +
> > > +               if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
> > > +                   is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
> > > +                       i += BPF_REG_SIZE - 1;
> > > +                       if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
> > > +                                    idmap, exact))
> > > +                               return false;
> > > +                       continue;
> > > +               }
> >
> > scalar_old = scalar_cur = NULL;
> > if (is_spilled_unbound64(&old->..))
> >     scalar_old = old->stack[spi].slot_type[0] == STACK_SPILL ?
> >                  &old->stack[spi].spilled_ptr : &unbound_reg;
> > if (is_spilled_unbound64(&cur->..))
> >     scalar_cur = cur->stack[spi].slot_type[0] == STACK_SPILL ?
> >                  &cur->stack[spi].spilled_ptr : &unbound_reg;
> > if (scalar_old && scalar_cur) {
> >     if (!regsafe(env, scalar_old, scalar_cur, idmap, exact))
> >         return false;
> >     i += BPF_REG_SIZE - 1;
> >     continue;
> > }
>
> Ok, I'll switch to this.
> (Although, I think old variant is a bit simpler to follow).

my goal was to eliminate the duplicated logic inside each if and to
show at a high level that we are comparing two "logically unbound
scalars", regardless of whether that's a STACK_xxx mix or a spilled
scalar.

I haven't thought this through, but perhaps we can simplify further to
something like this:

if (is_spilled_unbound64(old) && is_spilled_unbound64(cur)) {
  scalar_cur = ...
  scalar_old = ...
  if (!regsafe(...))
    return false;
  i += BPF_REG_SIZE - 1;
  continue;
}

In general, the symmetry between two consecutive if conditions seems like
an opportunity to simplify. But if you think the combined version ends up
more complicated, I'm fine with leaving it as is.
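
FWIW, spelled out with such a helper (hypothetical
is_spilled_unbound64() taking env plus the stack state, completely
untested), the whole block could become:

  if (is_spilled_unbound64(env, &old->stack[spi]) &&
      is_spilled_unbound64(env, &cur->stack[spi])) {
          struct bpf_reg_state *scalar_old, *scalar_cur;

          /* compare as logically unbound scalars, whether that's an
           * actual spilled register or a STACK_MISC/STACK_ZERO mix
           */
          scalar_old = old->stack[spi].slot_type[0] == STACK_SPILL ?
                       &old->stack[spi].spilled_ptr : &unbound_reg;
          scalar_cur = cur->stack[spi].slot_type[0] == STACK_SPILL ?
                       &cur->stack[spi].spilled_ptr : &unbound_reg;
          if (!regsafe(env, scalar_old, scalar_cur, idmap, exact))
                  return false;
          i += BPF_REG_SIZE - 1;
          continue;
  }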

>
> > where is_spilled_unbound64() would be basically `return
> > is_spilled_unbound_scalar_reg64(&old->..) ||
> > is_stack_unbound_slot64(&old->...)`;
> >
> > Similarly for the zero case? Though I'm wondering if the zero case
> > should be checked first, as it's actually a subset of
> > is_spilled_unbound64 when it comes to STACK_ZERO/STACK_MISC mixes, no?
>
> Yes, makes sense.
>
> [...]
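
To spell out the helper I had in mind (untested):

  /* true if the 64-bit stack slot holds either a spilled unbound
   * scalar or an all STACK_MISC/STACK_ZERO slot mix
   */
  static bool is_spilled_unbound64(struct bpf_verifier_env *env,
                                   struct bpf_stack_state *stack)
  {
          return is_spilled_unbound_scalar_reg64(stack) ||
                 is_stack_unbound_slot64(env, stack);
  }

with the zero case handled before this one in stacksafe(), since
STACK_ZERO/STACK_MISC mixes would satisfy both predicates.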
