Message-ID: <bbca91f2-d770-af69-8e6d-bfd18c7f1ec1@huawei.com>
Date: Fri, 8 Jul 2022 17:08:04 +0800
From: Xu Kuohai <xukuohai@...wei.com>
To: Jean-Philippe Brucker <jean-philippe@...aro.org>
CC: <bpf@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
Mark Rutland <mark.rutland@....com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Alexei Starovoitov <ast@...nel.org>,
Zi Shen Lim <zlim.lnx@...il.com>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
"David S . Miller" <davem@...emloft.net>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
"H . Peter Anvin" <hpa@...or.com>,
Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Russell King <rmk+kernel@...linux.org.uk>,
James Morse <james.morse@....com>,
Hou Tao <houtao1@...wei.com>,
Jason Wang <wangborong@...rlc.com>
Subject: Re: [PATCH bpf-next v6 4/4] bpf, arm64: bpf trampoline for arm64
On 7/8/2022 4:24 PM, Jean-Philippe Brucker wrote:
> On Fri, Jul 08, 2022 at 12:35:33PM +0800, Xu Kuohai wrote:
>>>> +
>>>> +	emit(A64_ADD_I(1, A64_R(0), A64_SP, args_off), ctx);
>>>> +	if (!p->jited)
>>>> +		emit_addr_mov_i64(A64_R(1), (const u64)p->insnsi, ctx);
>>>> +
>>>> +	emit_call((const u64)p->bpf_func, ctx);
>>>> +
>>>> +	/* store return value */
>>>> +	if (save_ret)
>>>> +		emit(A64_STR64I(r0, A64_SP, retval_off), ctx);
>>>
>>> Here too I think it should be x0. I'm guessing r0 may work for jitted
>>> functions but not interpreted ones.
>>>
>>
>> Yes, r0 is only correct for jitted code, will fix it to:
>>
>> 	if (save_ret)
>> 		emit(A64_STR64I(p->jited ? r0 : A64_R(0), A64_SP, retval_off),
>> 		     ctx);
>
> I don't think we need this test because x0 should be correct in all cases.
> x7 happens to equal x0 when jitted due to the way build_epilogue() builds
> the function at the moment, but we shouldn't rely on that.
>
>
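
Right, so the store can just use x0 unconditionally. Something like this
(a sketch, using the same helpers and offsets as the quoted hunk):

	/* store return value: x0 holds it for both jitted and
	 * interpreted progs, so no p->jited check is needed
	 */
	if (save_ret)
		emit(A64_STR64I(A64_R(0), A64_SP, retval_off), ctx);
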
>>>> +	if (flags & BPF_TRAMP_F_CALL_ORIG) {
>>>> +		restore_args(ctx, args_off, nargs);
>>>> +		/* call original func */
>>>> +		emit(A64_LDR64I(A64_R(10), A64_SP, retaddr_off), ctx);
>>>> +		emit(A64_BLR(A64_R(10)), ctx);
>>>
>>> I don't think we can do this when BTI is enabled because we're not jumping
>>> to a BTI instruction. We could introduce one in a patched BPF function
>>> (there currently is one if CONFIG_ARM64_PTR_AUTH_KERNEL), but probably not
>>> in a kernel function.
>>>
>>> We could do what FUNCTION_GRAPH_TRACER does and return to the patched
>>> function after modifying its LR. Not sure whether that works with pointer
>>> auth though.
>>>
>>
>> Yes, the blr instruction should be replaced with a ret instruction, thanks!
>>
>> The layout for a bpf prog and a regular kernel function is as follows, with
>> bti always coming first and paciasp immediately after the patch site, so the
>> ret instruction should work in all cases.
>>
>> bpf prog or kernel function:
>>         bti c // if BTI
>>         mov x9, lr
>>         bl <trampoline>   ------>  trampoline:
>>                                        ...
>>                                        mov lr, <return_entry>
>>                                        mov x10, <ORIG_CALL_entry>
>> ORIG_CALL_entry:          <-------     ret x10
>>                                    return_entry:
>>                                        ...
>>         paciasp // if PA
>>         ...
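
The change I have in mind is roughly this (a sketch, same helpers as the
quoted hunk, with A64_RET being the ret-to-register encoding from bpf_jit.h):

	/* call original func: return to x10, which holds ORIG_CALL_entry,
	 * instead of branching there with blr
	 */
	emit(A64_LDR64I(A64_R(10), A64_SP, retaddr_off), ctx);
	emit(A64_RET(A64_R(10)), ctx);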
>
> Actually I just noticed that CONFIG_ARM64_BTI_KERNEL depends on
> CONFIG_ARM64_PTR_AUTH_KERNEL, so we should be able to rely on there always
> being a PACIASP at ORIG_CALL_entry, and since it's a landing pad for BLR
> we don't need to make this a RET.
>
> 92e2294d870b ("arm64: bti: Support building kernel C code using BTI")
>
oh, yeah, thanks
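
To spell it out, with both BTI and PA enabled the patched function starts
roughly like this, and the trampoline's blr x10 lands on the paciasp at
ORIG_CALL_entry:

	bti c			// if BTI
	mov x9, lr		// patch site
	bl <trampoline>
	paciasp			// ORIG_CALL_entry, landing pad for blr x10
	...

so it seems fine to keep the blr as is.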
> Thanks,
> Jean
>
> .