Message-ID: <CAADnVQKrUDGjOi2_SSs=qTxpsXcgM-zdTY3D5VwOcewWQcRTjA@mail.gmail.com>
Date: Thu, 1 Aug 2019 13:44:18 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Song Liu <songliubraving@...com>
Cc: Alexei Starovoitov <ast@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Daniel Borkmann <daniel@...earbox.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn
On Wed, Jul 31, 2019 at 8:43 PM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> On Wed, Jul 31, 2019 at 12:36 PM Song Liu <songliubraving@...com> wrote:
> >
> >
> >
> > > On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@...nel.org> wrote:
> > >
> > > The introduction of bounded loops exposed an old bug in the x64 JIT.
> > > The JIT maintains an array of offsets to the end of all instructions to
> > > compute jmp offsets:
> > > addrs[0] - offset of the end of the 1st insn (which includes the prologue).
> > > addrs[1] - offset of the end of the 2nd insn.
> > > The JIT didn't keep the offset of the beginning of the 1st insn,
> > > since classic BPF didn't have backward jumps and valid extended BPF
> > > couldn't have a branch to the 1st insn, because it didn't allow loops.
> > > With bounded loops it's possible to construct a valid program that
> > > jumps backwards to the 1st insn.
> > > Fix the JIT by computing:
> > > addrs[0] - offset of the end of the prologue == start of the 1st insn.
> > > addrs[1] - offset of the end of the 1st insn.
> > >
> > > Reported-by: syzbot+35101610ff3e83119b1b@...kaller.appspotmail.com
> > > Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
> > > Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> > > Signed-off-by: Alexei Starovoitov <ast@...nel.org>
> >
> > Acked-by: Song Liu <songliubraving@...com>
> >
> > Do we need a similar fix for x86_32?
>
> Right. x86_32 would need a similar fix.
>
> Applied to bpf tree.
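To restate the addrs[] change from the patch above in a stand-alone form,
here is a minimal user-space sketch (illustrative only, not the JIT code
itself; all names and sizes are made up) of the new layout and of why a
backward jump to the 1st insn needs addrs[0] to be the end of the prologue:

#include <stdio.h>

/* Illustrative sketch only, not arch/x86/net/bpf_jit_comp.c.
 * jited_size[i] is a made-up jited length of BPF insn i (0-based),
 * prologue_len is a made-up jited prologue length.
 */
static void fill_addrs(unsigned int *addrs, const unsigned int *jited_size,
		       unsigned int len, unsigned int prologue_len)
{
	unsigned int off = prologue_len;
	unsigned int i;

	addrs[0] = off;			/* end of prologue == start of 1st insn */
	for (i = 0; i < len; i++) {
		off += jited_size[i];
		addrs[i + 1] = off;	/* end of insn i (0-based) */
	}
}

int main(void)
{
	unsigned int jited_size[] = { 4, 3, 2 };	/* three fake insns */
	unsigned int addrs[4];

	fill_addrs(addrs, jited_size, 3, 11);	/* 11-byte fake prologue */

	/* A jump in the 3rd insn back to the 1st insn is emitted at the end
	 * of the 3rd insn, so its displacement is start-of-target minus
	 * end-of-branch: addrs[0] - addrs[3] = 11 - 20 = -9.
	 */
	printf("backward jmp displacement: %d\n",
	       (int)addrs[0] - (int)addrs[3]);
	return 0;
}

With the old layout there simply was no entry for the start of the 1st insn,
so that displacement could not be computed.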
Yonghong noticed that the patch subtly changes the jited linfo.
Surprisingly, the perf-annotated output for source code in jited bpf progs
looks exactly the same for several large bpf progs that I've looked at.
This is to be investigated later.
I've applied the following fix:
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a56c95805732..991549a1c5f3 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1181,7 +1181,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (!image || !prog->is_func || extra_pass) {
 		if (image)
-			bpf_prog_fill_jited_linfo(prog, addrs);
+			bpf_prog_fill_jited_linfo(prog, addrs + 1);
 out_addrs:
 		kfree(addrs);
 		kfree(jit_data);
and re-pushed the bpf tree.
The new commit is here:
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git/commit/?id=7c2e988f400e83501e0a3568250780609b7c8263
Thanks Yonghong!
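For reference, the reason addrs + 1 is enough: a tiny stand-alone sketch
(illustrative only, made-up numbers reusing the example above) of how the +1
restores the pre-patch view that the linfo code was written against.

#include <stdio.h>

/* Illustrative only, not kernel code.  Same made-up program as above:
 * an 11-byte prologue and three insns of 4, 3 and 2 jited bytes.
 */
int main(void)
{
	/* old layout: element i = end of insn i+1 (no prologue entry) */
	unsigned int addrs_old[] = { 15, 18, 20 };
	/* new layout: element 0 = end of prologue, then end of each insn */
	unsigned int addrs_new[] = { 11, 15, 18, 20 };
	const unsigned int *linfo_view = addrs_new + 1;
	unsigned int i;

	/* the linfo fill keeps seeing exactly the pre-patch layout */
	for (i = 0; i < 3; i++)
		printf("insn %u: old %u  new+1 %u\n",
		       i, addrs_old[i], linfo_view[i]);
	return 0;
}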
For bpf-next we need to figure out how to make test_btf more robust.
We can probably check the first few insns for specific jited offsets,
but I don't yet see how to make that work for all archs.
And it will be annoying to keep it working with every change to the JIT.
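One possible direction (only a sketch of the idea; nothing like this is in
the tree and the helper below is hypothetical): instead of hard-coding
per-arch offsets for the first few insns, test_btf could check
arch-independent invariants of the jited line info, e.g. that every address
lands inside the jited image and that the addresses never go backwards:

#include <stdbool.h>

/* Hypothetical helper sketch for test_btf: arch-independent sanity checks
 * on the jited line info returned for a prog.  jited_linfo[] holds absolute
 * addresses, func_addr/jited_len describe the jited image.
 */
static bool jited_linfo_sane(const unsigned long long *jited_linfo,
			     unsigned int nr,
			     unsigned long long func_addr,
			     unsigned int jited_len)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		/* must point inside the jited image */
		if (jited_linfo[i] < func_addr ||
		    jited_linfo[i] >= func_addr + jited_len)
			return false;
		/* must not go backwards */
		if (i && jited_linfo[i] < jited_linfo[i - 1])
			return false;
	}
	return true;
}

This would not have caught the subtle linfo shift above, though, so it's
more of a smoke test than a replacement for checking specific offsets.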