lists.openwall.net — Open Source and information security mailing list archives
Date: Wed, 26 Jan 2022 19:10:58 +0800
From: Hou Tao <houtao1@...wei.com>
To: Daniel Borkmann <daniel@...earbox.net>, Alexei Starovoitov <ast@...nel.org>,
	Ard Biesheuvel <ard.biesheuvel@....com>
CC: Martin KaFai Lau <kafai@...com>, Yonghong Song <yhs@...com>,
	Andrii Nakryiko <andrii@...nel.org>, Zi Shen Lim <zlim.lnx@...il.com>,
	Will Deacon <will@...nel.org>, Catalin Marinas <catalin.marinas@....com>,
	<netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
	<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH bpf-next] bpf, arm64: enable kfunc call

Hi,

On 1/25/2022 12:21 AM, Daniel Borkmann wrote:
> On 1/19/22 3:49 PM, Hou Tao wrote:
>> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
>> randomization range to 2 GB"), for arm64, whether KASLR is enabled
>> or not, the module is placed within 2 GB of the kernel region, so
>> s32 in bpf_kfunc_desc is sufficient to represent the offset of a
>> module function relative to __bpf_call_base. The only thing needed
>> is to override bpf_jit_supports_kfunc_call().
>>
>> Signed-off-by: Hou Tao <houtao1@...wei.com>
>
> Lgtm, could we also add a BPF selftest to assert that this assumption
> won't break in future when bpf_jit_supports_kfunc_call() returns true?
>
> E.g. extending lib/test_bpf.ko could be an option, wdyt?
Makes sense. Will figure out how to do that.

Regards,
Tao

>
>> ---
>>   arch/arm64/net/bpf_jit_comp.c | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>> index e96d4d87291f..74f9a9b6a053 100644
>> --- a/arch/arm64/net/bpf_jit_comp.c
>> +++ b/arch/arm64/net/bpf_jit_comp.c
>> @@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>>       return prog;
>>   }
>>
>> +bool bpf_jit_supports_kfunc_call(void)
>> +{
>> +	return true;
>> +}
>> +
>>   u64 bpf_jit_alloc_exec_limit(void)
>>   {
>>       return VMALLOC_END - VMALLOC_START;
>>
>
> .