Message-ID: <YN3OEbjzxPgCWN0v@krava>
Date: Thu, 1 Jul 2021 16:15:45 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: "Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>
Cc: Brendan Jackman <jackmanb@...gle.com>,
Sandipan Das <sandipan@...ux.ibm.com>,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
Alexei Starovoitov <ast@...nel.org>, bpf <bpf@...r.kernel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...omium.org>,
LKML <linux-kernel@...r.kernel.org>,
Florent Revest <revest@...omium.org>
Subject: Re: [BUG soft lockup] Re: [PATCH bpf-next v3] bpf: Propagate stack
bounds to registers in atomics w/ BPF_FETCH
On Thu, Jul 01, 2021 at 04:32:03PM +0530, Naveen N. Rao wrote:
> Hi Brendan, Hi Jiri,
>
>
> Brendan Jackman wrote:
> > On Wed, 30 Jun 2021 at 14:42, Jiri Olsa <jolsa@...hat.com> wrote:
> > >
> > > On Wed, Jun 30, 2021 at 12:34:58PM +0200, Brendan Jackman wrote:
> > > > On Tue, 29 Jun 2021 at 23:09, Jiri Olsa <jolsa@...hat.com> wrote:
> > > > >
> > > > > On Tue, Jun 29, 2021 at 06:41:24PM +0200, Jiri Olsa wrote:
> > > > > > On Tue, Jun 29, 2021 at 06:25:33PM +0200, Brendan Jackman wrote:
> > > > > > > On Tue, 29 Jun 2021 at 18:04, Jiri Olsa <jolsa@...hat.com> wrote:
> > > > > > > > On Tue, Jun 29, 2021 at 04:10:12PM +0200, Jiri Olsa wrote:
> > > > > > > > > On Mon, Jun 28, 2021 at 11:21:42AM +0200, Brendan Jackman wrote:
> > > >
> > > > > > > > > > atomics in .imm). Any idea if this test was ever passing on PowerPC?
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > hum, I guess not.. will check
> > > > > > > >
> > > > > > > > nope, it locks up the same:
> > > > > > >
> > > > > > > Do you mean it locks up at commit 91c960b0056 too?
> > > >
> > > > Sorry I was being stupid here - the test didn't exist at this commit
> > > >
> > > > > > I tried this one:
> > > > > > 37086bfdc737 bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH
> > > > > >
> > > > > > I will check also 91c960b0056, but I think it's the new test issue
> > > >
> > > > So yeah hard to say whether this was broken on PowerPC all along. How
> > > > hard is it for me to get set up to reproduce the failure? Is there a
> > > > rootfs I can download, and some instructions for running a PowerPC
> > > > QEMU VM? If so, please also share your config and I'll take a look.
> > > >
> > > > If it's not as simple as that, I'll stare at the code for a while and
> > > > see if anything jumps out.
> > > >
> > >
> > > I have the latest Fedora ppc server and compile/install the latest bpf-next tree.
> > > I think it can be reproduced on a VM as well; I've attached my config.
> >
> > OK, getting set up to boot a PowerPC QEMU isn't practical here unless
> > someone's got commands I can copy-paste (suspect it will need .config
> > hacking too). Looks like you need to build a proper bootloader, and
> > boot an installer disk.
>
> There are some notes put up here, though we can do better:
> https://github.com/linuxppc/wiki/wiki/Booting-with-Qemu
>
> If you are familiar with ubuntu/fedora cloud images (and cloud-init), you
> should be able to grab one of the ppc64le images and boot it in qemu (a
> rough invocation follows the links below):
> https://cloud-images.ubuntu.com/releases/hirsute/release/
> https://alt.fedoraproject.org/alt/
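>
> Something like this should boot one of those images to a shell (a rough,
> untested sketch - the image filename, memory size, and cloud-init seed
> image are assumptions; build seed.img with cloud-localds or similar):
> ```shell
> # The pseries machine's SLOF firmware boots the cloud image directly,
> # so no separate bootloader or installer disk is needed.
> qemu-system-ppc64 -M pseries -cpu POWER9 -smp 4 -m 4G -nographic \
>     -drive file=ubuntu-21.04-server-cloudimg-ppc64el.img,if=virtio \
>     -drive file=seed.img,if=virtio,format=raw
> ```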
>
> >
> > Looked at the code for a bit but nothing jumped out. It seems like the
> > verifier is seeing a BPF_ADD | BPF_FETCH, which means it doesn't
> > detect an infinite loop, but then we lose the BPF_FETCH flag somewhere
> > between do_check in verifier.c and bpf_jit_build_body in
> > bpf_jit_comp64.c. That would explain why we don't get the "eBPF filter
> > atomic op code %02x (@%d) unsupported" error, and would also explain the
> > lockup, because a normal atomic add without fetch would leave BPF R1
> > unchanged.
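> >
> > (For reference, the guts of that selftest look roughly like this - I'm
> > quoting from memory, so exact registers and offsets may differ:)
> > ```
> > BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0),   /* *(u64 *)(fp - 8) = 0 */
> > BPF_MOV64_IMM(BPF_REG_1, 1),             /* r1 = 1 */
> > /* r1 = atomic_fetch_add((u64 *)(fp - 8), r1) */
> > BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_FP, BPF_REG_1, -8),
> > /* With BPF_FETCH, r1 holds the old value (0), so this never loops;
> >  * if the JIT silently drops BPF_FETCH, r1 stays 1 and it spins forever. */
> > BPF_JMP32_IMM(BPF_JNE, BPF_REG_1, 0, -1),
> > BPF_EXIT_INSN(),
> > ```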
> >
> > We should be able to confirm that theory by disassembling the JITted
> > code that gets hexdumped by bpf_jit_dump when bpf_jit_enable is set to
> > 2... at least for PowerPC 32-bit... maybe you could paste those lines
> > into the 64-bit version too? Here are some notes I made for
> > disassembling the hexdump on x86; I guess you'd just need to change
> > the objdump flags (a guess for ppc64le follows the notes):
> >
> > --
> >
> > - Enable console JIT output:
> > ```shell
> > echo 2 > /proc/sys/net/core/bpf_jit_enable
> > ```
> > - Load & run the program of interest.
> > - Copy the hex code from the kernel console to `/tmp/jit.txt`. Here's what a
> > short program looks like. This includes a line of context - don't paste the
> > `flen=` line.
> > ```
> > [ 79.381020] flen=8 proglen=54 pass=4 image=000000001af6f390 from=test_verifier pid=258
> > [ 79.389568] JIT code: 00000000: 0f 1f 44 00 00 66 90 55 48 89 e5 48 81 ec 08 00
> > [ 79.397411] JIT code: 00000010: 00 00 48 c7 45 f8 64 00 00 00 bf 04 00 00 00 48
> > [ 79.405965] JIT code: 00000020: f7 df f0 48 29 7d f8 8b 45 f8 48 83 f8 60 74 02
> > [ 79.414719] JIT code: 00000030: c9 c3 31 c0 eb fa
> > ```
> > - This incantation will split out and decode the hex, then disassemble the
> > result:
> > ```shell
> > cat /tmp/jit.txt | cut -d: -f2- | xxd -r >/tmp/obj && objdump -D -b binary -m i386:x86-64 /tmp/obj
> > ```
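> >
> > For ppc64le I'd guess (untested) the objdump step becomes:
> > ```shell
> > objdump -D -b binary -m powerpc:common64 -EL /tmp/obj
> > ```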
> >
> > --
> >
> > Sandipan, Naveen, do you know of anything in the PowerPC code that
> > might be leading us to drop the BPF_FETCH flag from the atomic
> > instruction in tools/testing/selftests/bpf/verifier/atomic_bounds.c?
>
> Yes, I think I just found the issue. We aren't looking at the correct BPF
> instruction when checking the IMM value: insn points at the start of the
> program's instruction stream, so insn->imm always reads the first
> instruction's imm field, while insn[i].imm reads the instruction
> currently being JITed.

great, nice catch! :-) that fixes it for me..

Tested-by: Jiri Olsa <jolsa@...hat.com>

thanks,
jirka

>
>
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -673,7 +673,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
> * BPF_STX ATOMIC (atomic ops)
> */
> case BPF_STX | BPF_ATOMIC | BPF_W:
> - if (insn->imm != BPF_ADD) {
> + if (insn[i].imm != BPF_ADD) {
> pr_err_ratelimited(
> "eBPF filter atomic op code %02x (@%d) unsupported\n",
> code, i);
> @@ -695,7 +695,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
> PPC_BCC_SHORT(COND_NE, tmp_idx);
> break;
> case BPF_STX | BPF_ATOMIC | BPF_DW:
> - if (insn->imm != BPF_ADD) {
> + if (insn[i].imm != BPF_ADD) {
> pr_err_ratelimited(
> "eBPF filter atomic op code %02x (@%d) unsupported\n",
> code, i);
>
>
>
> Thanks,
> Naveen
>