Message-ID: <cs4dybyzefyyk3e7zzec6mayt6re773ouo3svar4kcoxd6rakg@qodmhcted3mm>
Date: Fri, 13 Dec 2024 13:57:19 -0700
From: Daniel Xu <dxu@...uu.xyz>
To: Eduard Zingerman <eddyz87@...il.com>
Cc: andrii@...nel.org, ast@...nel.org, shuah@...nel.org,
daniel@...earbox.net, john.fastabend@...il.com, martin.lau@...ux.dev, song@...nel.org,
yonghong.song@...ux.dev, kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com,
jolsa@...nel.org, mykolal@...com, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH bpf-next v5 4/5] bpf: verifier: Support eliding map
lookup nullness
On Thu, Dec 12, 2024 at 08:04:45PM GMT, Eduard Zingerman wrote:
> On Thu, 2024-12-12 at 16:22 -0700, Daniel Xu wrote:
>
> I think these changes are fine in general, but see below.
>
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 58b36cc96bd5..4947ef884a18 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -287,6 +287,7 @@ struct bpf_call_arg_meta {
> > u32 ret_btf_id;
> > u32 subprogno;
> > struct btf_field *kptr_field;
> > + s64 const_map_key;
> > };
> >
> > struct bpf_kfunc_call_arg_meta {
> > @@ -9163,6 +9164,53 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
> > return 0;
> > }
> >
> > +/* Returns constant key value if possible, else -1 */
> > +static s64 get_constant_map_key(struct bpf_verifier_env *env,
> > + struct bpf_reg_state *key,
> > + u32 key_size)
>
> I understand that this is not your use case, but maybe generalize this
> a bit by checking maximal register value instead of a constant?
I'll check on this. If it works, I think you're right - it allows more
flexibility while retaining safety. A user could define max_entries as a
power of two and then mask the key with 0xFFFF.. to guarantee null-free
codepaths.
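Something like the following, sketched as plain userspace C rather than
BPF program code (`MAX_ENTRIES` and `bounded_key` are illustrative names,
not from the patch):

```c
#include <stdint.h>

#define MAX_ENTRIES 4096  /* power of two, as the map's max_entries */

/* Any 32-bit value masked with (MAX_ENTRIES - 1) lands in
 * [0, MAX_ENTRIES - 1], i.e. a valid array map index. If the verifier
 * tracked the key's maximum value instead of requiring an exact
 * constant, it could prove the lookup never returns NULL.
 */
static uint32_t bounded_key(uint32_t raw)
{
	return raw & (MAX_ENTRIES - 1);
}
```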
>
> > +{
> > + struct bpf_func_state *state = func(env, key);
> > + struct bpf_reg_state *reg;
> > + int zero_size = 0;
> > + int stack_off;
> > + u8 *stype;
> > + int slot;
> > + int spi;
> > + int i;
> > +
> > + if (!env->bpf_capable)
> > + return -1;
> > + if (key->type != PTR_TO_STACK)
> > + return -1;
> > + if (!tnum_is_const(key->var_off))
> > + return -1;
> > +
> > + stack_off = key->off + key->var_off.value;
> > + slot = -stack_off - 1;
> > + spi = slot / BPF_REG_SIZE;
> > +
> > + /* First handle precisely tracked STACK_ZERO, up to BPF_REG_SIZE */
> > + stype = state->stack[spi].slot_type;
> > + for (i = 0; i < BPF_REG_SIZE && stype[i] == STACK_ZERO; i++)
> > + zero_size++;
> > + if (zero_size == key_size)
> > + return 0;
> > +
> > + if (!is_spilled_reg(&state->stack[spi]))
> > + /* Not pointer to stack */
> > + return -1;
>
> Nit: there is a 'is_spilled_scalar_reg' utility function.
Ack.
>
> > +
> > + reg = &state->stack[spi].spilled_ptr;
> > + if (reg->type != SCALAR_VALUE)
> > + /* Only scalars are valid array map keys */
> > + return -1;
> > + else if (!tnum_is_const(reg->var_off))
> > + /* Stack value not statically known */
> > + return -1;
>
> I think you need to check if size of the spill matches the size of the key.
> The mismatch would be unsafe when spill size is smaller than key size.
> E.g. consider 1-byte spill with mask 'mmmmmmrr' and a 4-byte key,
> at runtime the 'mmmmmm' part might be non-zero, rendering key to be
> out of range.
Ah great catch. I think you're right.
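For the record, the failure mode in plain C (a userspace sketch of the
stack slot, not verifier code - the helper name is made up):

```c
#include <stdint.h>
#include <string.h>

/* Simulate an 8-byte stack slot whose upper bytes hold stale data
 * (the 'mmmmmm' part of the spill mask) while only the low byte was
 * spilled as the known-constant value 0.
 */
static uint32_t read_key_from_short_spill(void)
{
	uint8_t slot[8];
	uint32_t key;

	memset(slot, 0xAA, sizeof(slot)); /* leftover stack contents */
	slot[0] = 0;                      /* 1-byte spill of constant 0 */

	/* The program then passes a 4-byte key read from the slot. */
	memcpy(&key, slot, sizeof(key));
	return key;
}
```

The 4-byte key picks up the untracked bytes and is nonzero on either
endianness, so treating the 1-byte spill as a known 4-byte constant
would let an out-of-range key through.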