Message-ID: <20191115042939.ckt4fqvtfdi344y2@ast-mbp.dhcp.thefacebook.com>
Date: Thu, 14 Nov 2019 20:29:41 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: ast@...nel.org, john.fastabend@...il.com, netdev@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH rfc bpf-next 8/8] bpf: constant map key tracking for prog
array pokes
On Fri, Nov 15, 2019 at 02:04:02AM +0100, Daniel Borkmann wrote:
> Add tracking of constant keys into tail call maps. The signature of
> bpf_tail_call_proto is that arg1 is the ctx, arg2 is the map pointer and
> arg3 is the index key. The direct call approach for tail calls can be enabled
> if the verifier asserted that for all branches leading to the tail call
> helper invocation, the map pointer and index key were both constant
> and the same. Tracking of map pointers is already done from prior work
> via c93552c443eb ("bpf: properly enforce index mask to prevent out-of-bounds
> speculation") and 09772d92cd5a ("bpf: avoid retpoline for lookup/update/
> delete calls on maps"). Given the tail call map index key is not on the
> stack but directly in the register, we can add a similar tracking approach
> and later in fixup_bpf_calls() add a poke descriptor to the prog's poke_tab
> with the relevant information for the JITing phase. We internally reuse
> insn->imm for the rewritten BPF_JMP | BPF_TAIL_CALL instruction in order
> to point into the prog's poke_tab and keep insn->imm == 0 as an indicator
> that the current indirect tail call emission must be used.
>
> Signed-off-by: Daniel Borkmann <daniel@...earbox.net>
> ---
>  include/linux/bpf_verifier.h |  1 +
>  kernel/bpf/verifier.c        | 98 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 99 insertions(+)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index cdd08bf0ec06..f494f0c9ac13 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -301,6 +301,7 @@ struct bpf_insn_aux_data {
>  			u32 map_off; /* offset from value base address */
>  		};
>  	};
> +	u64 key_state; /* constant key tracking for maps */
Maybe map_key_state? Plain key_state is a bit ambiguous inside
bpf_insn_aux_data.
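
(For context, since the helper names come up in the next hunk: a minimal
sketch of what the key_state encoding behind bpf_map_key_store(),
bpf_map_key_unseen() and bpf_map_key_immediate() could look like. This is
an assumption for illustration, not necessarily the patch's exact
definitions; the flag values and helper bodies are guesses consistent
with how record_func_key() below uses them.)

/* Assumed encoding: low bits hold the constant key, two high bits
 * carry tracking state.
 */
#define BPF_MAP_KEY_POISON	(1ULL << 63)	/* conflicting keys seen */
#define BPF_MAP_KEY_SEEN	(1ULL << 62)	/* at least one key recorded */

static bool bpf_map_key_poisoned(const struct bpf_insn_aux_data *aux)
{
	return aux->key_state & BPF_MAP_KEY_POISON;
}

static bool bpf_map_key_unseen(const struct bpf_insn_aux_data *aux)
{
	return !(aux->key_state & BPF_MAP_KEY_SEEN);
}

static u64 bpf_map_key_immediate(const struct bpf_insn_aux_data *aux)
{
	return aux->key_state & ~(BPF_MAP_KEY_SEEN | BPF_MAP_KEY_POISON);
}

static void bpf_map_key_store(struct bpf_insn_aux_data *aux, u64 state)
{
	/* once poisoned, stay poisoned across later stores */
	bool poisoned = bpf_map_key_poisoned(aux);

	aux->key_state = state | BPF_MAP_KEY_SEEN |
			 (poisoned ? BPF_MAP_KEY_POISON : 0ULL);
}
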
> +static int
> +record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
> +		int func_id, int insn_idx)
> +{
> +	struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
> +	struct bpf_reg_state *regs = cur_regs(env), *reg;
> +	struct tnum range = tnum_range(0, U32_MAX);
> +	struct bpf_map *map = meta->map_ptr;
> +	u64 val;
> +
> +	if (func_id != BPF_FUNC_tail_call)
> +		return 0;
> +	if (!map || map->map_type != BPF_MAP_TYPE_PROG_ARRAY) {
> +		verbose(env, "kernel subsystem misconfigured verifier\n");
> +		return -EINVAL;
> +	}
> +
> +	reg = &regs[BPF_REG_3];
> +	if (!register_is_const(reg) || !tnum_in(range, reg->var_off)) {
> +		bpf_map_key_store(aux, BPF_MAP_KEY_POISON);
> +		return 0;
> +	}
> +
> +	val = reg->var_off.value;
> +	if (bpf_map_key_unseen(aux))
> +		bpf_map_key_store(aux, val);
> +	else if (bpf_map_key_immediate(aux) != val)
> +		bpf_map_key_store(aux, BPF_MAP_KEY_POISON);
> +	return 0;
> +}
I think this analysis is very useful in other cases as well. Could you
generalize it for array map lookups? The key used in bpf_map_lookup_elem()
for arrays is often constant. In such cases we can optimize
array_map_gen_lookup() into an absolute pointer: the
if (idx < max_entries) ptr += idx * elem_size;
computation could be done during verification instead of at runtime, and
the whole bpf_map_lookup_elem(map, &key) call would become a single
instruction that assigns &array[idx] into R0.
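
To make the suggestion concrete, here is a hypothetical sketch (not part
of this patch) of a constant-key fast path in the style of
array_map_gen_lookup() from kernel/bpf/arraymap.c. It assumes the
verifier has already proven the key to be the constant idx with
idx < map->max_entries; the function name and exact shape are made up
for illustration:

/* Hypothetical: with the key constant at verification time, the whole
 * inlined lookup collapses to materializing the element address in R0.
 */
static u32 array_map_gen_lookup_const(struct bpf_map *map,
				      struct bpf_insn *insn_buf, u32 idx)
{
	struct bpf_array *array = container_of(map, struct bpf_array, map);
	u32 elem_size = round_up(map->value_size, 8);
	u64 addr = (u64)(unsigned long)(array->value + idx * elem_size);
	/* BPF_LD_IMM64() expands to a two-insn pair */
	struct bpf_insn ld_addr[2] = { BPF_LD_IMM64(BPF_REG_0, addr) };

	insn_buf[0] = ld_addr[0];	/* R0 = &array[idx], precomputed */
	insn_buf[1] = ld_addr[1];
	return 2;
}

Since array elements never move after allocation, the precomputed address
stays valid for the program's lifetime; per-cpu arrays would still need
the existing runtime path.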