Message-ID: <X89G2kItO2o60+A6@google.com>
Date: Tue, 8 Dec 2020 09:26:50 +0000
From: Brendan Jackman <jackmanb@...gle.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: bpf@...r.kernel.org, Alexei Starovoitov <ast@...nel.org>,
Yonghong Song <yhs@...com>,
Daniel Borkmann <daniel@...earbox.net>,
KP Singh <kpsingh@...omium.org>,
Florent Revest <revest@...omium.org>,
linux-kernel@...r.kernel.org, Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH bpf-next v4 04/11] bpf: Rename BPF_XADD and prepare to
encode other atomics in .imm

Hi John, thanks a lot for the reviews!

On Mon, Dec 07, 2020 at 01:56:53PM -0800, John Fastabend wrote:
> Brendan Jackman wrote:
> > A subsequent patch will add additional atomic operations. These new
> > operations will use the same opcode field as the existing XADD, with
> > the immediate discriminating different operations.
> >
> > In preparation, rename the instruction mode to BPF_ATOMIC and start
> > calling the zero immediate BPF_ADD.
> >
> > This is possible (it doesn't break existing valid BPF progs) because
> > the immediate field is currently reserved and must be zero (MBZ), and
> > BPF_ADD is zero.
> >
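
As a side note for anyone following the encoding argument, here is a
minimal sketch of why the rename is bit-compatible: BPF_ATOMIC takes
over the mode value previously named BPF_XADD (0xc0), and BPF_ADD is
0x00, so the two spellings below build byte-identical instructions
(names as in include/uapi/linux/bpf.h):

    #include <linux/bpf.h>

    /* Old spelling: the imm field was reserved and had to be zero. */
    struct bpf_insn xadd_insn = {
            .code    = BPF_STX | BPF_DW | BPF_XADD,   /* mode 0xc0 */
            .dst_reg = BPF_REG_1,
            .src_reg = BPF_REG_2,
            .off     = 0,
            .imm     = 0,
    };

    /* New spelling: same mode bits, operation carried in imm.
     * BPF_ADD == 0x00, so this is byte-identical to xadd_insn. */
    struct bpf_insn atomic_insn = {
            .code    = BPF_STX | BPF_DW | BPF_ATOMIC, /* mode 0xc0 */
            .dst_reg = BPF_REG_1,
            .src_reg = BPF_REG_2,
            .off     = 0,
            .imm     = BPF_ADD,
    };
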
> > All uses are removed from the tree but the BPF_XADD definition is
> > kept around to avoid breaking builds for people including kernel
> > headers.
> >
> > Signed-off-by: Brendan Jackman <jackmanb@...gle.com>
> > ---
> > Documentation/networking/filter.rst | 30 ++++++++-----
> > arch/arm/net/bpf_jit_32.c | 7 ++-
> > arch/arm64/net/bpf_jit_comp.c | 16 +++++--
> > arch/mips/net/ebpf_jit.c | 11 +++--
> > arch/powerpc/net/bpf_jit_comp64.c | 25 ++++++++---
> > arch/riscv/net/bpf_jit_comp32.c | 20 +++++++--
> > arch/riscv/net/bpf_jit_comp64.c | 16 +++++--
> > arch/s390/net/bpf_jit_comp.c | 27 ++++++-----
> > arch/sparc/net/bpf_jit_comp_64.c | 17 +++++--
> > arch/x86/net/bpf_jit_comp.c | 45 ++++++++++++++-----
> > arch/x86/net/bpf_jit_comp32.c | 6 +--
> > drivers/net/ethernet/netronome/nfp/bpf/jit.c | 14 ++++--
> > drivers/net/ethernet/netronome/nfp/bpf/main.h | 4 +-
> > .../net/ethernet/netronome/nfp/bpf/verifier.c | 15 ++++---
> > include/linux/filter.h | 29 ++++++++++--
> > include/uapi/linux/bpf.h | 5 ++-
> > kernel/bpf/core.c | 31 +++++++++----
> > kernel/bpf/disasm.c | 6 ++-
> > kernel/bpf/verifier.c | 24 +++++-----
> > lib/test_bpf.c | 14 +++---
> > samples/bpf/bpf_insn.h | 4 +-
> > samples/bpf/cookie_uid_helper_example.c | 6 +--
> > samples/bpf/sock_example.c | 2 +-
> > samples/bpf/test_cgrp2_attach.c | 5 ++-
> > tools/include/linux/filter.h | 28 ++++++++++--
> > tools/include/uapi/linux/bpf.h | 5 ++-
> > .../bpf/prog_tests/cgroup_attach_multi.c | 4 +-
> > .../selftests/bpf/test_cgroup_storage.c | 2 +-
> > tools/testing/selftests/bpf/verifier/ctx.c | 7 ++-
> > .../bpf/verifier/direct_packet_access.c | 4 +-
> > .../testing/selftests/bpf/verifier/leak_ptr.c | 10 ++---
> > .../selftests/bpf/verifier/meta_access.c | 4 +-
> > tools/testing/selftests/bpf/verifier/unpriv.c | 3 +-
> > .../bpf/verifier/value_illegal_alu.c | 2 +-
> > tools/testing/selftests/bpf/verifier/xadd.c | 18 ++++----
> > 35 files changed, 317 insertions(+), 149 deletions(-)
> >
>
> [...]
>
> > +++ b/arch/mips/net/ebpf_jit.c
>
> [...]
>
> > - if (BPF_MODE(insn->code) == BPF_XADD) {
> > + if (BPF_MODE(insn->code) == BPF_ATOMIC) {
> > + if (insn->imm != BPF_ADD) {
> > + pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm);
> > + return -EINVAL;
> > + }
> > +
> > /*
> [...]
> > +++ b/arch/powerpc/net/bpf_jit_comp64.c
>
> > - case BPF_STX | BPF_XADD | BPF_W:
> > + case BPF_STX | BPF_ATOMIC | BPF_W:
> > + if (insn->imm != BPF_ADD) {
> > + pr_err_ratelimited(
> > + "eBPF filter atomic op code %02x (@%d) unsupported\n",
> > + code, i);
> > + return -ENOTSUPP;
> > + }
> [...]
> > @@ -699,8 +707,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
> > - case BPF_STX | BPF_XADD | BPF_DW:
> > + case BPF_STX | BPF_ATOMIC | BPF_DW:
> > + if (insn->imm != BPF_ADD) {
> > + pr_err_ratelimited(
> > + "eBPF filter atomic op code %02x (@%d) unsupported\n",
> > + code, i);
> > + return -ENOTSUPP;
> > + }
> [...]
> > + case BPF_STX | BPF_ATOMIC | BPF_W:
> > + if (insn->imm != BPF_ADD) {
> > + pr_info_once(
> > + "bpf-jit: not supported: atomic operation %02x ***\n",
> > + insn->imm);
> > + return -EFAULT;
> > + }
> [...]
> > + case BPF_STX | BPF_ATOMIC | BPF_W:
> > + case BPF_STX | BPF_ATOMIC | BPF_DW:
> > + if (insn->imm != BPF_ADD) {
> > + pr_err("bpf-jit: not supported: atomic operation %02x ***\n",
> > + insn->imm);
> > + return -EINVAL;
> > + }
>
> Can we standardize the error message across JITs, and the error return
> code? It seems odd that we use pr_err, pr_info_once and pr_err_ratelimited,
> and then return ENOTSUPP, EFAULT or EINVAL.

That would be a noble cause, but I don't think it makes sense in this
patchset: the JITs are already inconsistent, so here I've gone for
intra-JIT consistency over inter-JIT consistency.

I think it would be more annoying if, for example, the s390 JIT returned
-EOPNOTSUPP for a bad atomic but -1 for other unsupported ops, than it
already is that the s390 JIT returns -1 where the MIPS JIT returns
-EINVAL.
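
For reference, a standardized check along the lines you suggest might
look something like the hypothetical helper below, shared by the JITs.
This is a sketch only, not part of this patchset; the helper name and
error code are chosen arbitrarily:

    /* Hypothetical: reject any atomic op the JIT does not implement.
     * Assumes a kernel context (linux/filter.h, linux/printk.h). */
    static int bpf_jit_check_atomic(const struct bpf_insn *insn)
    {
            if (insn->imm != BPF_ADD) {
                    pr_err_once("bpf-jit: unsupported atomic op %02x\n",
                                insn->imm);
                    return -EOPNOTSUPP;
            }
            return 0;
    }
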
> granted, the error codes might not propagate all the way out at the
> moment, but it still shouldn't hurt.
>
> > diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
> > index 0a4182792876..f973e2ead197 100644
> > --- a/arch/s390/net/bpf_jit_comp.c
> > +++ b/arch/s390/net/bpf_jit_comp.c
> > @@ -1205,18 +1205,23 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
>
> For example, this one will return -1, regardless of the specific error,
> when insn->imm != BPF_ADD.
> [...]
> > + case BPF_STX | BPF_ATOMIC | BPF_DW:
> > + case BPF_STX | BPF_ATOMIC | BPF_W:
> > + if (insn->imm != BPF_ADD) {
> > + pr_err("Unknown atomic operation %02x\n", insn->imm);
> > + return -1;
> > + }
> > +
> [...]
>
> > --- a/include/linux/filter.h
> > +++ b/include/linux/filter.h
> > @@ -259,15 +259,38 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> > .off = OFF, \
> > .imm = 0 })
> >
> > -/* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */
> > +
> > +/*
> > + * Atomic operations:
> > + *
> > + * BPF_ADD *(uint *) (dst_reg + off16) += src_reg
> > + */
> > +
> > +#define BPF_ATOMIC64(OP, DST, SRC, OFF) \
> > + ((struct bpf_insn) { \
> > + .code = BPF_STX | BPF_DW | BPF_ATOMIC, \
> > + .dst_reg = DST, \
> > + .src_reg = SRC, \
> > + .off = OFF, \
> > + .imm = OP })
> > +
> > +#define BPF_ATOMIC32(OP, DST, SRC, OFF) \
> > + ((struct bpf_insn) { \
> > + .code = BPF_STX | BPF_W | BPF_ATOMIC, \
> > + .dst_reg = DST, \
> > + .src_reg = SRC, \
> > + .off = OFF, \
> > + .imm = OP })
> > +
> > +/* Legacy equivalent of BPF_ATOMIC{64,32}(BPF_ADD, ...) */
>
> Not sure I care too much, but it does seem more natural to follow the
> pattern below and use:
>
> BPF_ATOMIC(OP, SIZE, DST, SRC, OFF)
>
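
For concreteness, a sketch of that unified form, mirroring the
BPF_STX_XADD pattern quoted below. Note that the name BPF_ATOMIC itself
would collide with the new mode macro, so a real version would need a
different spelling; BPF_ATOMIC_OP here is hypothetical:

    #define BPF_ATOMIC_OP(OP, SIZE, DST, SRC, OFF)                   \
            ((struct bpf_insn) {                                     \
                    .code    = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
                    .dst_reg = DST,                                  \
                    .src_reg = SRC,                                  \
                    .off     = OFF,                                  \
                    .imm     = OP })
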
> >
> > #define BPF_STX_XADD(SIZE, DST, SRC, OFF) \
> > ((struct bpf_insn) { \
> > - .code = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD, \
> > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > .dst_reg = DST, \
> > .src_reg = SRC, \
> > .off = OFF, \
> > - .imm = 0 })
> > + .imm = BPF_ADD })
> >
> > /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
> >
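
As a usage note, the legacy macro and the new ones should now agree,
which is what keeps existing callers building. A hypothetical snippet:

    /* Both build the same BPF_STX | BPF_DW | BPF_ATOMIC instruction
     * with imm == BPF_ADD. */
    struct bpf_insn a = BPF_STX_XADD(BPF_DW, BPF_REG_1, BPF_REG_2, 0);
    struct bpf_insn b = BPF_ATOMIC64(BPF_ADD, BPF_REG_1, BPF_REG_2, 0);
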
>
> [...]
>
> Otherwise LGTM. I'll try to get the remaining patches reviewed tonight;
> I need to jump onto something else this afternoon. Thanks!