Message-ID: <mb61pttj1k6nz.fsf@gmail.com>
Date: Mon, 13 May 2024 16:39:28 +0000
From: Puranjay Mohan <puranjay12@...il.com>
To: Maxwell Bland <mbland@...orola.com>, "open list:BPF [GENERAL] (Safe
Dynamic Programs and Tools)" <bpf@...r.kernel.org>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon
<will@...nel.org>, Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
<daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, Martin KaFai
Lau <martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>, Song Liu
<song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>, John Fastabend
<john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>, Stanislav
Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa
<jolsa@...nel.org>, Zi Shen Lim <zlim.lnx@...il.com>, Mark Rutland
<mark.rutland@....com>, Suzuki K Poulose <suzuki.poulose@....com>, Mark
Brown <broonie@...nel.org>, linux-arm-kernel@...ts.infradead.org, open
list <linux-kernel@...r.kernel.org>, Josh Poimboeuf <jpoimboe@...nel.org>
Subject: Re: [PATCH bpf-next v4 2/3] arm64/cfi,bpf: Support kCFI + BPF on arm64
Maxwell Bland <mbland@...orola.com> writes:

This patch has a subtle difference from the patch that I sent in v2[1].
Unfortunately, you didn't test this. :(

It will break BPF on an ARM64 kernel compiled with CONFIG_CFI_CLANG=y.
See below:
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 76b91f36c729..703247457409 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -17,6 +17,7 @@
> #include <asm/asm-extable.h>
> #include <asm/byteorder.h>
> #include <asm/cacheflush.h>
> +#include <asm/cfi.h>
> #include <asm/debug-monitors.h>
> #include <asm/insn.h>
> #include <asm/patching.h>
> @@ -162,6 +163,12 @@ static inline void emit_bti(u32 insn, struct jit_ctx *ctx)
> emit(insn, ctx);
> }
>
> +static inline void emit_kcfi(u32 hash, struct jit_ctx *ctx)
> +{
> + if (IS_ENABLED(CONFIG_CFI_CLANG))
> + emit(hash, ctx);
> +}
> +
> /*
> * Kernel addresses in the vmalloc space use at most 48 bits, and the
> * remaining bits are guaranteed to be 0x1. So we can compose the address
> @@ -337,6 +344,7 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
> *
> */
In my original patch the hunk here looked something like:
--- >8 ---
- const int idx0 = ctx->idx;
int cur_offset;
/*
@@ -332,6 +338,8 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
*
*/
+ emit_kcfi(is_subprog ? cfi_bpf_subprog_hash : cfi_bpf_hash, ctx);
+ const int idx0 = ctx->idx;
--- 8< ---
Moving 'idx0 = ctx->idx;' to after emit_kcfi() is important because
'idx0' is later used like:
cur_offset = ctx->idx - idx0;
if (cur_offset != PROLOGUE_OFFSET) {
pr_err_once("PROLOGUE_OFFSET = %d, expected %d!\n",
cur_offset, PROLOGUE_OFFSET);
return -1;
}
With the current version, when I boot the kernel I get:
[ 0.499207] bpf_jit: PROLOGUE_OFFSET = 13, expected 12!
and now no BPF program can be JITed!
Please fix this in the next version and test it by running:

tools/testing/selftests/bpf/test_progs

Pay attention to the `rbtree_success` and the `dummy_st_ops` tests, as
they are the important ones for this change.
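For example (a sketch, assuming the BPF selftests build cleanly in your
tree; test_progs' -t flag takes a comma-separated list of test name
substrings to run):

```shell
# From the root of the kernel source tree:
cd tools/testing/selftests/bpf
make                                 # build the BPF selftests
./test_progs -t rbtree,dummy_st_ops  # run only the tests relevant here
```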
[1] https://lore.kernel.org/all/20240324211518.93892-2-puranjay12@gmail.com/
Thanks,
Puranjay