Message-ID: <20191115032345.loei6qqgyo4tdbuq@ast-mbp.dhcp.thefacebook.com>
Date: Thu, 14 Nov 2019 19:23:46 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: ast@...nel.org, john.fastabend@...il.com, netdev@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH rfc bpf-next 7/8] bpf, x86: emit patchable direct jump as
tail call
On Fri, Nov 15, 2019 at 02:04:01AM +0100, Daniel Borkmann wrote:
> for later modifications. In ii) fixup_bpf_tail_call_direct() walks
> over the prog's poke_tab, locks the tail call map's poke_mutex to
> prevent parallel updates, and patches in the right locations via
...
> @@ -1610,6 +1671,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> prog->bpf_func = (void *)image;
> prog->jited = 1;
> prog->jited_len = proglen;
> + fixup_bpf_tail_call_direct(prog);
Why not move fixup_bpf_tail_call_direct() just before
bpf_jit_binary_lock_ro() and use a simple memcpy instead of text_poke?
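Something along these lines is what I mean. Only a sketch; insn_offset and
target_offset are made-up field names for illustration, not taken from your
series:

	int i;

	/* Sketch: the image pages are not read-only yet at this point in
	 * bpf_int_jit_compile(), so rewriting the rel32 of each emitted
	 * direct jmp is just a plain store; text_poke() is only required
	 * once the pages have been sealed.
	 */
	for (i = 0; i < prog->aux->size_poke_tab; i++) {
		struct bpf_jit_poke_descriptor *poke = &prog->aux->poke_tab[i];
		u8 *ip = image + poke->insn_offset;                      /* hypothetical field */
		s32 rel = poke->target_offset - (poke->insn_offset + 5); /* hypothetical field */

		memcpy(ip + 1, &rel, sizeof(rel));   /* patch jmp rel32 in place */
	}

	bpf_jit_binary_lock_ro(header);
	prog->bpf_func = (void *)image;
	prog->jited = 1;
	prog->jited_len = proglen;

That way text_poke() would only be needed for patching an image that is
already live and read-only.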
imo this logic in patch 7:
case BPF_JMP | BPF_TAIL_CALL:
+ if (imm32)
+ emit_bpf_tail_call_direct(&bpf_prog->aux->poke_tab[imm32 - 1],
would have been easier to understand if patches 7 and 8 had been swapped.