Message-ID: <e329fdce-8c2f-4bc2-88e1-b079ec382eef@huaweicloud.com>
Date: Thu, 20 Jun 2024 10:41:29 +0800
From: Xu Kuohai <xukuohai@...weicloud.com>
To: Puranjay Mohan <puranjay@...nel.org>, Alexei Starovoitov
 <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
 Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau
 <martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>,
 Song Liu <song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>,
 John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
 Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>,
 Jiri Olsa <jolsa@...nel.org>, Catalin Marinas <catalin.marinas@....com>,
 Will Deacon <will@...nel.org>, bpf@...r.kernel.org,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: puranjay12@...il.com
Subject: Re: [PATCH] bpf, arm64: inline bpf_get_current_task/_btf() helpers

On 6/19/2024 9:13 PM, Puranjay Mohan wrote:
> On ARM64, the pointer to task_struct is always available in the sp_el0
> register and therefore the calls to bpf_get_current_task() and
> bpf_get_current_task_btf() can be inlined into a single MRS instruction.
> 
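For context, this is the same mechanism the arm64 kernel itself uses for
get_current(). A minimal sketch modelled on arch/arm64/include/asm/current.h
(not a verbatim copy; the function name here is illustrative):

	/* Sketch: arm64 keeps the current task_struct pointer in sp_el0
	 * while running at EL1, so a single MRS read recovers it.
	 */
	struct task_struct;

	static inline struct task_struct *arm64_get_current(void)
	{
		unsigned long sp_el0;

		asm ("mrs %0, sp_el0" : "=r" (sp_el0));

		return (struct task_struct *)sp_el0;
	}

So emitting the MRS directly in JITed code is equivalent to calling the helper.
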
> Here is the difference before and after this change:
> 
> Before:
> 
> ; struct task_struct *task = bpf_get_current_task_btf();
>    54:   mov     x10, #0xffffffffffff7978        // #-34440
>    58:   movk    x10, #0x802b, lsl #16
>    5c:   movk    x10, #0x8000, lsl #32
>    60:   blr     x10          -------------->    0xffff8000802b7978 <+0>:     mrs     x0, sp_el0
>    64:   add     x7, x0, #0x0 <--------------    0xffff8000802b797c <+4>:     ret
> 
> After:
> 
> ; struct task_struct *task = bpf_get_current_task_btf();
>    54:   mrs     x7, sp_el0
> 
> This shows around a 1% performance improvement in an artificial microbenchmark.
>

I think it would be better if more detailed benchmark data could be provided.

> Signed-off-by: Puranjay Mohan <puranjay@...nel.org>
> ---
>   arch/arm64/net/bpf_jit_comp.c | 9 +++++++++
>   1 file changed, 9 insertions(+)
> 
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 720336d28856..b838dab3bd26 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -1244,6 +1244,13 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
>   			break;
>   		}
>   
> +		/* Implement helper call to bpf_get_current_task/_btf() inline */
> +		if (insn->src_reg == 0 && (insn->imm == BPF_FUNC_get_current_task ||
> +					   insn->imm == BPF_FUNC_get_current_task_btf)) {
> +			emit(A64_MRS_SP_EL0(r0), ctx);
> +			break;
> +		}
> +
>   		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
>   					    &func_addr, &func_addr_fixed);
>   		if (ret < 0)
> @@ -2581,6 +2588,8 @@ bool bpf_jit_inlines_helper_call(s32 imm)
>   {
>   	switch (imm) {
>   	case BPF_FUNC_get_smp_processor_id:
> +	case BPF_FUNC_get_current_task:
> +	case BPF_FUNC_get_current_task_btf:
>   		return true;
>   	default:
>   		return false;
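
For anyone who wants to see the effect, a minimal BPF program that exercises
the helper (program and section names below are illustrative, not from the
patch) could look like:

	// SPDX-License-Identifier: GPL-2.0
	/* Minimal illustrative program: with this patch the arm64 JIT
	 * replaces the bpf_get_current_task_btf() call below with a single
	 * read of sp_el0 instead of an out-of-line helper call.
	 */
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("tracepoint/syscalls/sys_enter_getpid")
	int trace_getpid(void *ctx)
	{
		struct task_struct *task = bpf_get_current_task_btf();

		bpf_printk("current task: %p", task);
		return 0;
	}

	char _license[] SEC("license") = "GPL";

The JITed output of such a program can then be inspected with
"bpftool prog dump jited" to confirm the single mrs instruction.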

Acked-by: Xu Kuohai <xukuohai@...wei.com>

