Message-ID: <29f1e58e-1ecf-e191-f60f-c82eb8a7e76c@intel.com>
Date: Thu, 25 Feb 2021 07:39:59 +0100
From: Björn Töpel <bjorn.topel@...el.com>
To: Daniel Borkmann <daniel@...earbox.net>,
Björn Töpel <bjorn.topel@...il.com>,
ast@...nel.org, netdev@...r.kernel.org, bpf@...r.kernel.org
Cc: maciej.fijalkowski@...el.com, hawk@...nel.org, toke@...hat.com,
magnus.karlsson@...el.com, john.fastabend@...il.com,
kuba@...nel.org, davem@...emloft.net
Subject: Re: [PATCH bpf-next v3 1/2] bpf, xdp: per-map bpf_redirect_map
functions for XDP
On 2021-02-25 00:38, Daniel Borkmann wrote:
> On 2/21/21 9:09 PM, Björn Töpel wrote:
>> From: Björn Töpel <bjorn.topel@...el.com>
>>
>> Currently the bpf_redirect_map() implementation dispatches to the
>> correct map-lookup function via a switch-statement. To avoid the
>> dispatching, this change adds one bpf_redirect_map() implementation per
>> map. The correct function is selected automatically by the BPF verifier.
>>
>> v2->v3 : Fix build when CONFIG_NET is not set. (lkp)
>> v1->v2 : Re-added comment. (Toke)
>> rfc->v1: Get rid of the macro and use __always_inline. (Jesper)
>>
>> Acked-by: Toke Høiland-Jørgensen <toke@...hat.com>
>> Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
>
> [...]
>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 3d34ba492d46..89ccc10c6348 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -5409,7 +5409,8 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
>> func_id != BPF_FUNC_map_delete_elem &&
>> func_id != BPF_FUNC_map_push_elem &&
>> func_id != BPF_FUNC_map_pop_elem &&
>> - func_id != BPF_FUNC_map_peek_elem)
>> + func_id != BPF_FUNC_map_peek_elem &&
>> + func_id != BPF_FUNC_redirect_map)
>> return 0;
>> if (map == NULL) {
>> @@ -11545,12 +11546,12 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
>> struct bpf_prog *prog = env->prog;
>> bool expect_blinding = bpf_jit_blinding_enabled(prog);
>> struct bpf_insn *insn = prog->insnsi;
>> - const struct bpf_func_proto *fn;
>> const int insn_cnt = prog->len;
>> const struct bpf_map_ops *ops;
>> struct bpf_insn_aux_data *aux;
>> struct bpf_insn insn_buf[16];
>> struct bpf_prog *new_prog;
>> + bpf_func_proto_func func;
>> struct bpf_map *map_ptr;
>> int i, ret, cnt, delta = 0;
>> @@ -11860,17 +11861,23 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
>> }
>> patch_call_imm:
>> - fn = env->ops->get_func_proto(insn->imm, env->prog);
>> + if (insn->imm == BPF_FUNC_redirect_map) {
>> + aux = &env->insn_aux_data[i];
>> + map_ptr = BPF_MAP_PTR(aux->map_ptr_state);
>> + func = get_xdp_redirect_func(map_ptr->map_type);
>
> Nope, this is broken. :/ The map_ptr could be poisoned, so unconditionally
> fetching map_ptr->map_type can crash the box for specially crafted BPF progs.
>
Thanks for explaining, Daniel! I'll address that!
> Also, given you add the related BPF_CALL_3() functions below, what is the
> reason to not properly integrate this like the map ops near
> patch_map_ops_generic?
>
...and I'll have a look at how the map-patching works!
Cheers,
Björn
[...]