Message-ID: <61f23655411bc_57f032084@john.notmuch>
Date:   Wed, 26 Jan 2022 22:06:13 -0800
From:   John Fastabend <john.fastabend@...il.com>
To:     Hou Tao <houtao1@...wei.com>, Alexei Starovoitov <ast@...nel.org>
Cc:     Martin KaFai Lau <kafai@...com>, Yonghong Song <yhs@...com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Song Liu <songliubraving@...com>,
        "David S . Miller" <davem@...emloft.net>,
        John Fastabend <john.fastabend@...il.com>,
        netdev@...r.kernel.org, bpf@...r.kernel.org, houtao1@...wei.com,
        Zi Shen Lim <zlim.lnx@...il.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Julien Thierry <jthierry@...hat.com>,
        Mark Rutland <mark.rutland@....com>,
        Ard Biesheuvel <ardb@...nel.org>,
        linux-arm-kernel@...ts.infradead.org
Subject: RE: [PATCH bpf-next 2/2] arm64, bpf: support more atomic operations

Hou Tao wrote:
> The "Atomics for eBPF" patch series added support for atomic[64]_fetch_add,
> atomic[64]_[fetch_]{and,or,xor} and atomic[64]_{xchg,cmpxchg}, but it
> only implemented them for x86-64, so support these atomic operations
> on arm64 as well.
> 
> The implementation is basically a mechanical translation of the code
> snippets in atomic_ll_sc.h, atomic_lse.h and cmpxchg.h under
> arch/arm64/include/asm. An extra temporary register is needed for
> (BPF_ADD | BPF_FETCH) to save the value of the src register; instead
> of adding a TMP_REG_4, BPF_REG_AX is used.
> 
> For both the cpus_have_cap(ARM64_HAS_LSE_ATOMICS) case and the
> no-LSE-atomics case, ./test_verifier and "./test_progs -t atomic"
> were exercised and passed.
> 
> Signed-off-by: Hou Tao <houtao1@...wei.com>
> ---
>  

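For readers following along, the BPF-program-side operations these JIT
cases service look roughly like below (a minimal sketch using compiler
atomic builtins; the section name and the map-less global counter are
illustrative, this is not taken from the patch's selftests):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	__u64 counter = 0;

	SEC("xdp")
	int atomics_example(struct xdp_md *ctx)
	{
		__u64 old;

		/* result unused: clang emits the no-fetch form,
		 * i.e. BPF_ATOMIC with BPF_ADD (STADD under LSE)
		 */
		__sync_fetch_and_add(&counter, 1);

		/* result used: needs -mcpu=v3 or later and emits
		 * BPF_ADD | BPF_FETCH (LDADDAL under LSE)
		 */
		old = __sync_fetch_and_add(&counter, 1);

		return (old & 1) ? XDP_DROP : XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";
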
[...]

> +static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
> +{
> +	const u8 code = insn->code;
> +	const u8 dst = bpf2a64[insn->dst_reg];
> +	const u8 src = bpf2a64[insn->src_reg];
> +	const u8 tmp = bpf2a64[TMP_REG_1];
> +	const u8 tmp2 = bpf2a64[TMP_REG_2];
> +	const bool isdw = BPF_SIZE(code) == BPF_DW;
> +	const s16 off = insn->off;
> +	u8 reg;
> +
> +	if (!off) {
> +		reg = dst;
> +	} else {
> +		emit_a64_mov_i(1, tmp, off, ctx);
> +		emit(A64_ADD(1, tmp, tmp, dst), ctx);
> +		reg = tmp;
> +	}
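
The LSE LD*/ST*/SWP/CAS forms only take a bare base register with no
immediate offset, so a non-zero off has to be materialized into tmp
first. A sketch of the resulting flow for the BPF_ADD case below
(illustrative, not literal JIT output):

	// off == 0:  stadd  x(src), [x(dst)]
	// off != 0:  mov    x(tmp), #off
	//            add    x(tmp), x(tmp), x(dst)
	//            stadd  x(src), [x(tmp)]
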
> +
> +	switch (insn->imm) {

Diffing against the x86 implementation, which has a BPF_SUB case: how
is it avoided here?

> +	/* lock *(u32/u64 *)(dst_reg + off) <op>= src_reg */
> +	case BPF_ADD:
> +		emit(A64_STADD(isdw, reg, src), ctx);
> +		break;
> +	case BPF_AND:
> +		emit(A64_MVN(isdw, tmp2, src), ctx);
> +		emit(A64_STCLR(isdw, reg, tmp2), ctx);
> +		break;
> +	case BPF_OR:
> +		emit(A64_STSET(isdw, reg, src), ctx);
> +		break;
> +	case BPF_XOR:
> +		emit(A64_STEOR(isdw, reg, src), ctx);
> +		break;
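
Worth spelling out for other reviewers: LSE has no atomic AND, only
LDCLR/STCLR (atomic bit clear, i.e. AND NOT), hence the extra MVN in
the BPF_AND cases. A userspace C model of the identity being used
(illustrative only, not kernel code):

	/* model of STCLR: atomically clear the bits set in mask,
	 * i.e. *p &= ~mask
	 */
	static void model_stclr(unsigned long *p, unsigned long mask)
	{
		__atomic_fetch_and(p, ~mask, __ATOMIC_RELAXED);
	}

	/* BPF_AND: invert src (the MVN above) and clear those bits;
	 * *p &= ~(~src) is the same as *p &= src
	 */
	static void bpf_and_via_clr(unsigned long *p, unsigned long src)
	{
		model_stclr(p, ~src);
	}
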
> +	/* src_reg = atomic_fetch_add(dst_reg + off, src_reg) */
> +	case BPF_ADD | BPF_FETCH:
> +		emit(A64_LDADDAL(isdw, src, reg, src), ctx);
> +		break;
> +	case BPF_AND | BPF_FETCH:
> +		emit(A64_MVN(isdw, tmp2, src), ctx);
> +		emit(A64_LDCLRAL(isdw, src, reg, tmp2), ctx);
> +		break;
> +	case BPF_OR | BPF_FETCH:
> +		emit(A64_LDSETAL(isdw, src, reg, src), ctx);
> +		break;
> +	case BPF_XOR | BPF_FETCH:
> +		emit(A64_LDEORAL(isdw, src, reg, src), ctx);
> +		break;
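
The AL suffix on the fetch variants gives acquire+release ordering, so
these are fully ordered, and reusing src as both operand and
destination register matches the BPF semantics
src_reg = atomic_fetch_<op>(...). Roughly, in C11 terms (illustrative
model, not kernel code):

	/* model of A64_LDADDAL for BPF_ADD | BPF_FETCH */
	static long model_ldaddal(long *p, long v)
	{
		return __atomic_fetch_add(p, v, __ATOMIC_ACQ_REL);
	}
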
> +	/* src_reg = atomic_xchg(dst_reg + off, src_reg); */
> +	case BPF_XCHG:
> +		emit(A64_SWPAL(isdw, src, reg, src), ctx);
> +		break;
> +	/* r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg); */
> +	case BPF_CMPXCHG:
> +		emit(A64_CASAL(isdw, src, reg, bpf2a64[BPF_REG_0]), ctx);
> +		break;
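
And for BPF_CMPXCHG the contract is that r0 supplies the expected
value and receives the old value whether or not the store happened,
which is what CASAL with bpf2a64[BPF_REG_0] as the compare-and-result
register provides. A userspace model (illustrative only, not kernel
code):

	/* r0 = cmpxchg(dst + off, r0, src): returns the prior *p;
	 * on success r0 already equals it, on failure the builtin
	 * updates r0 to it
	 */
	static long model_cmpxchg(long *p, long r0, long src)
	{
		__atomic_compare_exchange_n(p, &r0, src, 0,
					    __ATOMIC_ACQ_REL,
					    __ATOMIC_ACQUIRE);
		return r0;
	}
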
> +	default:
> +		pr_err_once("unknown atomic op code %02x\n", insn->imm);
> +		return -EINVAL;

I was about to suggest maybe EFAULT to align with x86, but on second
thought it seems the arm64 JIT uses EINVAL more universally, so best
to be self-consistent. Just an observation.

> +	}
> +
> +	return 0;
> +}
> +
