Message-ID: <20220127071532.384888-2-houtao1@huawei.com>
Date: Thu, 27 Jan 2022 15:15:31 +0800
From: Hou Tao <houtao1@...wei.com>
To: Daniel Borkmann <daniel@...earbox.net>
CC: Alexei Starovoitov <ast@...nel.org>,
Martin KaFai Lau <kafai@...com>, Yonghong Song <yhs@...com>,
Andrii Nakryiko <andrii@...nel.org>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
<netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
<houtao1@...wei.com>, Zi Shen Lim <zlim.lnx@...il.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Ard Biesheuvel <ard.biesheuvel@....com>,
<linux-arm-kernel@...ts.infradead.org>
Subject: [PATCH bpf-next v2 1/2] bpf, arm64: enable kfunc call

Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
randomization range to 2 GB"), on arm64 the module region is placed
within 2 GB of the kernel image whether KASLR is enabled or not, so
the s32 field in bpf_kfunc_desc is sufficient to represent the offset
of a module function relative to __bpf_call_base. The only thing
needed is to override bpf_jit_supports_kfunc_call().
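
For reference, the constraint this relies on can be sketched roughly as
below (illustrative only, not part of this patch; the helper name is
made up, and the real range check lives in the generic kfunc support of
the verifier):

	/* Any kfunc address, whether in vmlinux or in a module, is
	 * within 2 GB of the kernel image on arm64, so its distance
	 * from __bpf_call_base (declared in linux/filter.h) fits in
	 * the s32 imm field of the BPF call instruction.
	 */
	static bool kfunc_offset_fits_imm(unsigned long addr)
	{
		s64 off = (s64)addr - (s64)(unsigned long)__bpf_call_base;

		return off == (s32)off;
	}
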
Signed-off-by: Hou Tao <houtao1@...wei.com>
---
 arch/arm64/net/bpf_jit_comp.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 6d92f363028c..e60c464004c2 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1284,6 +1284,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	return prog;
 }
 
+bool bpf_jit_supports_kfunc_call(void)
+{
+	return true;
+}
+
 u64 bpf_jit_alloc_exec_limit(void)
 {
 	return VMALLOC_END - VMALLOC_START;
--
2.27.0