Message-ID: <b54b3297-086c-1b64-1c25-01f70c6412af@iogearbox.net>
Date: Mon, 24 Jan 2022 17:21:26 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: Hou Tao <houtao1@...wei.com>, Alexei Starovoitov <ast@...nel.org>,
Ard Biesheuvel <ard.biesheuvel@....com>
Cc: Martin KaFai Lau <kafai@...com>, Yonghong Song <yhs@...com>,
Andrii Nakryiko <andrii@...nel.org>,
Zi Shen Lim <zlim.lnx@...il.com>,
Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
netdev@...r.kernel.org, bpf@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH bpf-next] bpf, arm64: enable kfunc call

On 1/19/22 3:49 PM, Hou Tao wrote:
> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
> randomization range to 2 GB"), modules on arm64 are placed within
> 2 GB of the kernel region whether or not KASLR is enabled, so the
> s32 in bpf_kfunc_desc is sufficient to represent the offset of a
> module function relative to __bpf_call_base. The only thing needed
> is to override bpf_jit_supports_kfunc_call().
>
> Signed-off-by: Hou Tao <houtao1@...wei.com>
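
For context on the s32 assumption: the verifier already enforces it
at load time by encoding each kfunc target as an offset from
__bpf_call_base and rejecting anything out of range. Roughly, from
add_kfunc_call() in kernel/bpf/verifier.c:

	call_imm = BPF_CALL_IMM(addr);	/* addr - __bpf_call_base */
	/* Check whether or not the relative offset overflows desc->imm */
	if ((unsigned long)(s32)call_imm != call_imm) {
		verbose(env, "address of kernel function %s is out of range\n",
			func_name);
		return -EINVAL;
	}

So with modules guaranteed to sit within 2 GB of the kernel image,
this check can no longer trigger on arm64.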
LGTM, could we also add a BPF selftest to assert that this assumption
won't break in the future when bpf_jit_supports_kfunc_call() returns
true? E.g. extending lib/test_bpf.ko could be an option, wdyt?
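
On the selftests side, a minimal sketch of what such a program could
look like, reusing the existing bpf_kfunc_call_test1() test kfunc from
net/bpf/test_run.c (untested, along the lines of
tools/testing/selftests/bpf/progs/kfunc_call_test.c):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	/* Test kfunc exported via BTF from net/bpf/test_run.c. */
	extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b,
					  __u32 c, __u64 d) __ksym;

	SEC("tc")
	int kfunc_call_test(struct __sk_buff *skb)
	{
		struct bpf_sock *sk = skb->sk;

		if (!sk)
			return -1;

		sk = bpf_sk_fullsock(sk);
		if (!sk)
			return -1;

		/* If the JIT advertises kfunc support but the s32 offset
		 * assumption breaks, loading/running this program fails.
		 */
		return bpf_kfunc_call_test1((struct sock *)sk, 1, 2, 3, 4);
	}

	char _license[] SEC("license") = "GPL";

Running that against the arm64 JIT once this patch is in would then
cover exactly the case above.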
> ---
>  arch/arm64/net/bpf_jit_comp.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index e96d4d87291f..74f9a9b6a053 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>  	return prog;
>  }
>  
> +bool bpf_jit_supports_kfunc_call(void)
> +{
> +	return true;
> +}
> +
>  u64 bpf_jit_alloc_exec_limit(void)
>  {
>  	return VMALLOC_END - VMALLOC_START;
>