Message-ID: <61f35869ba5a_738dc20823@john.notmuch>
Date: Thu, 27 Jan 2022 18:43:53 -0800
From: John Fastabend <john.fastabend@...il.com>
To: Hou Tao <houtao1@...wei.com>,
John Fastabend <john.fastabend@...il.com>,
Alexei Starovoitov <ast@...nel.org>
Cc: Martin KaFai Lau <kafai@...com>, Yonghong Song <yhs@...com>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Song Liu <songliubraving@...com>,
"David S . Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
bpf@...r.kernel.org, Zi Shen Lim <zlim.lnx@...il.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Julien Thierry <jthierry@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Ard Biesheuvel <ardb@...nel.org>,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH bpf-next 2/2] arm64, bpf: support more atomic operations
Hou Tao wrote:
> Hi,
>
> On 1/27/2022 2:06 PM, John Fastabend wrote:
> > Hou Tao wrote:
> >> The atomics-for-eBPF patch series adds support for atomic[64]_fetch_add,
> >> atomic[64]_[fetch_]{and,or,xor} and atomic[64]_{xchg|cmpxchg}, but
> >> it only adds support for x86-64, so support these atomic operations
> >> on arm64 as well.
> >>
> >> +static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
> >> +{
> >> + const u8 code = insn->code;
> >> + const u8 dst = bpf2a64[insn->dst_reg];
> >> + const u8 src = bpf2a64[insn->src_reg];
> >> + const u8 tmp = bpf2a64[TMP_REG_1];
> >> + const u8 tmp2 = bpf2a64[TMP_REG_2];
> >> + const bool isdw = BPF_SIZE(code) == BPF_DW;
> >> + const s16 off = insn->off;
> >> + u8 reg;
> >> +
> >> + if (!off) {
> >> + reg = dst;
> >> + } else {
> >> + emit_a64_mov_i(1, tmp, off, ctx);
> >> + emit(A64_ADD(1, tmp, tmp, dst), ctx);
> >> + reg = tmp;
> >> + }
> >> +
> >> + switch (insn->imm) {
> > Diff'ing X86 implementation which has a BPF_SUB case how is it avoided
> > here?
> I think it is just left over from patchset [1], because according to
> the LLVM commit [2], __sync_fetch_and_sub(&addr, value) is implemented
> as __sync_fetch_and_add(&addr, -value), so the JIT never sees a BPF_SUB
> atomic.
> I will post a patch to remove it.
OK, in that case LGTM, with the caveat that I'm not an ARM expert.
Acked-by: John Fastabend <john.fastabend@...il.com>
[...]
> >> + default:
> >> + pr_err_once("unknown atomic op code %02x\n", insn->imm);
> >> + return -EINVAL;
> > I was about to suggest maybe EFAULT to align with x86, but on second
> > thought the arm64 JIT uses -EINVAL more universally, so best to be
> > self-consistent. Just an observation.
> OK. So I will still return -EINVAL for an invalid atomic operation.
Sounds good to me.
> >
> >> + }
> >> +
> >> + return 0;
> >> +}
> >> +