Message-ID: <B6B907F5-E6CA-4C0B-92F3-0411CA4F4D95@fb.com>
Date: Wed, 31 Jul 2019 19:35:27 +0000
From: Song Liu <songliubraving@...com>
To: Alexei Starovoitov <ast@...nel.org>
CC: "David S. Miller" <davem@...emloft.net>,
Daniel Borkmann <daniel@...earbox.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn

> On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@...nel.org> wrote:
>
> The introduction of bounded loops exposed an old bug in the x64 JIT.
> The JIT maintains an array of offsets to the end of each instruction,
> which it uses to compute jmp offsets:
> addrs[0] - offset of the end of the 1st insn (which includes the prologue).
> addrs[1] - offset of the end of the 2nd insn.
> The JIT didn't keep the offset of the beginning of the 1st insn,
> since classic BPF didn't have backward jumps and valid extended BPF
> couldn't branch to the 1st insn, because loops were not allowed.
> With bounded loops it's possible to construct a valid program that
> jumps backwards to the 1st insn.
> Fix the JIT by computing:
> addrs[0] - offset of the end of the prologue == start of the 1st insn.
> addrs[1] - offset of the end of the 1st insn.
>
> Reported-by: syzbot+35101610ff3e83119b1b@...kaller.appspotmail.com
> Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
> Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
Acked-by: Song Liu <songliubraving@...com>
Do we need a similar fix for x86_32?
Thanks,
Song
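
For anyone tracing the offset math, here is a minimal sketch (my own
illustration, not the kernel code; the helper name jmp_displacement is
made up) of how the end-of-insn offsets in addrs[] become an x86 rel32
displacement, mirroring the addrs[i + insn->off] - addrs[i] pattern
used in do_jit():

/* Sketch only, not kernel code.  addrs[i] holds the byte offset, from
 * the start of the JITed image, of the END of insn i; after this patch
 * addrs[0] is the end of the prologue, i.e. the start of the 1st insn.
 */
static int jmp_displacement(const int *addrs, int i, int bpf_off)
{
	/* A BPF jump at insn i with offset 'bpf_off' lands on insn
	 * i + bpf_off + 1.  The end of insn (i + bpf_off) is the start
	 * of that target, and the end of insn i is where the CPU
	 * continues after the jump insn itself, so:
	 */
	return addrs[i + bpf_off] - addrs[i];
}

With the new 1-indexed layout, a branch from insn i back to the 1st
insn has bpf_off = -i and reads addrs[0]; under the old 0-indexed
layout the same branch would have read addrs[-1], which was never
stored.
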
> ---
> arch/x86/net/bpf_jit_comp.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index eaaed5bfc4a4..a56c95805732 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -390,8 +390,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
>
> emit_prologue(&prog, bpf_prog->aux->stack_depth,
> bpf_prog_was_classic(bpf_prog));
> + addrs[0] = prog - temp;
>
> - for (i = 0; i < insn_cnt; i++, insn++) {
> + for (i = 1; i <= insn_cnt; i++, insn++) {
> const s32 imm32 = insn->imm;
> u32 dst_reg = insn->dst_reg;
> u32 src_reg = insn->src_reg;
> @@ -1105,7 +1106,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> extra_pass = true;
> goto skip_init_addrs;
> }
> - addrs = kmalloc_array(prog->len, sizeof(*addrs), GFP_KERNEL);
> + addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
> if (!addrs) {
> prog = orig_prog;
> goto out_addrs;
> @@ -1115,7 +1116,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> * Before first pass, make a rough estimation of addrs[]
> * each BPF instruction is translated to less than 64 bytes
> */
> - for (proglen = 0, i = 0; i < prog->len; i++) {
> + for (proglen = 0, i = 0; i <= prog->len; i++) {
> proglen += 64;
> addrs[i] = proglen;
> }
> --
> 2.20.0
>
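
To make the index shift concrete, here is a toy standalone program
(again my own sketch, not kernel code; the 64-byte prologue and 4-byte
instruction encodings are made-up sizes) that builds the patched
addrs[] layout for a 4-insn program whose 3rd insn branches back to
the 1st:

#include <stdio.h>

int main(void)
{
	int addrs[4 + 1];	/* prog->len + 1 entries, as in the patch */
	int i;

	addrs[0] = 64;			/* end of prologue = start of insn 1 */
	for (i = 1; i <= 4; i++)
		addrs[i] = 64 + i * 4;	/* end of insn i */

	/* A branch at insn i = 3 with BPF off = -3 targets insn
	 * i + off + 1 = 1.  The old layout would have needed addrs[-1]
	 * here; with the fix it is addrs[0].
	 */
	printf("jmp displacement = %d\n", addrs[3 - 3] - addrs[3]); /* -12 */
	return 0;
}

The -12 displacement jumps back over three 4-byte insns to offset 64,
the end of the prologue and start of the 1st insn.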