Message-ID: <f333e28c-a4ff-62eb-4b75-ee301e5ea53f@csgroup.eu>
Date: Tue, 22 Nov 2022 07:54:18 +0000
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>
CC: Andrii Nakryiko <andrii@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Hao Luo <haoluo@...gle.com>,
John Fastabend <john.fastabend@...il.com>,
Jiri Olsa <jolsa@...nel.org>, KP Singh <kpsingh@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Stanislav Fomichev <sdf@...gle.com>,
Song Liu <song@...nel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
Yonghong Song <yhs@...com>
Subject: Re: [PATCH] powerpc/bpf/32: Fix Oops on tail call tests
On 22/11/2022 at 08:33, Naveen N. Rao wrote:
> Christophe Leroy wrote:
>> test_bpf tail call tests end up as:
>>
>> test_bpf: #0 Tail call leaf jited:1 85 PASS
>> test_bpf: #1 Tail call 2 jited:1 111 PASS
>> test_bpf: #2 Tail call 3 jited:1 145 PASS
>> test_bpf: #3 Tail call 4 jited:1 170 PASS
>> test_bpf: #4 Tail call load/store leaf jited:1 190 PASS
>> test_bpf: #5 Tail call load/store jited:1
>> BUG: Unable to handle kernel data access on write at 0xf1b4e000
>> Faulting instruction address: 0xbe86b710
>> Oops: Kernel access of bad area, sig: 11 [#1]
>> BE PAGE_SIZE=4K MMU=Hash PowerMac
>> Modules linked in: test_bpf(+)
>> CPU: 0 PID: 97 Comm: insmod Not tainted 6.1.0-rc4+ #195
>> Hardware name: PowerMac3,1 750CL 0x87210 PowerMac
>> NIP: be86b710 LR: be857e88 CTR: be86b704
>> REGS: f1b4df20 TRAP: 0300 Not tainted (6.1.0-rc4+)
>> MSR: 00009032 <EE,ME,IR,DR,RI> CR: 28008242 XER: 00000000
>> DAR: f1b4e000 DSISR: 42000000
>> GPR00: 00000001 f1b4dfe0 c11d2280 00000000 00000000 00000000 00000002 00000000
>> GPR08: f1b4e000 be86b704 f1b4e000 00000000 00000000 100d816a f2440000 fe73baa8
>> GPR16: f2458000 00000000 c1941ae4 f1fe2248 00000045 c0de0000 f2458030 00000000
>> GPR24: 000003e8 0000000f f2458000 f1b4dc90 3e584b46 00000000 f24466a0 c1941a00
>> NIP [be86b710] 0xbe86b710
>> LR [be857e88] __run_one+0xec/0x264 [test_bpf]
>> Call Trace:
>> [f1b4dfe0] [00000002] 0x2 (unreliable)
>> Instruction dump:
>> XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
>> XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
>> ---[ end trace 0000000000000000 ]---
>>
>> This is an attempt to write above the stack. The problem is encountered
>> with tests added by commit 38608ee7b690 ("bpf, tests: Add load store
>> test case for tail call")
>>
>> This happens because the tail call is done to a BPF prog with a different
>> stack_depth. Currently, the stack is kept as is when the caller
>> tail calls its callee. But at exit, the callee restores the stack based
>> on its own properties. Therefore here, at each run, r1 is erroneously
>> increased by 32 - 16 = 16 bytes.
>>
>> This was done that way in order to pass the tail call count from caller
>> to callee through the stack. As powerpc32 doesn't have a red zone in
>> the stack, it was necessary to maintain the stack as is for the tail
>> call. But it was not anticipated that the BPF frame size could be
>> different.
>>
>> Let's take a new approach. Use register r0 to carry the tail call count
>> during the tail call, and save it into the stack at function entry if
>> required. That's a deviation from the ppc32 ABI, but after all, the way
>> tail calls are implemented is already not in accordance with the ABI.
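To make the drift concrete, here is a small stand-alone C model of it
(illustration only: the 16-byte caller frame and 32-byte callee frame are
assumptions matching the 32 - 16 = 16 calculation above, not values taken
from the real test progs):

	#include <stdio.h>

	int main(void)
	{
		long drift = 0;			/* how far r1 has moved from its start */
		const long caller_frame = 16;	/* assumed caller frame size */
		const long callee_frame = 32;	/* assumed callee frame size */

		for (int run = 1; run <= 4; run++) {
			drift -= caller_frame;	/* caller prologue: stwu r1,-16(r1) */
			/* tail call: stack kept as is, callee prologue skipped */
			drift += callee_frame;	/* callee epilogue: addi r1,r1,32 */
			printf("after run %d: r1 is %ld bytes above its start\n",
			       run, drift);
		}
		return 0;
	}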
>
> Can we pass the tail call count in r4 instead?
It's a bit tricky.
When entering the function through the normal entry point, the input
parameter is a 32-bit pointer and is in r3.
But at the beginning of the function it gets moved to r4 and r3 is
cleared because it becomes a 64-bit parameter.
When using the tail call entry point, it is already in r4, and until now
r3 contained garbage; with this patch r3 gets cleared as well.
We could move the input pointer back into r3 for the tail call as well,
but that would mean an unnecessary register move.
Or we could use r3 for the tail call counter.
Or I could make r3/r4 a proper 64-bit parameter (meaning r3 would be
cleared before the call instead of at the function's tail call entry),
and use r5 for the tail call counter. Maybe that's the cleanest.
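As a rough, untested sketch of that last option (illustrative only, not
part of the posted patch):

	/* Normal entry point (sketch): r3/r4 hold the 64-bit BPF_REG_1 pair
	 * and r5 carries the tail call count. The tail call path would have
	 * to keep r3/r4 and r5 consistent with this on its own. */
	EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_1), _R3));	/* pointer: r3 -> r4 (low word) */
	EMIT(PPC_RAW_LI(bpf_to_ppc(BPF_REG_1) - 1, 0));	/* clear the high word, i.e. r3 */
	EMIT(PPC_RAW_LI(_R5, 0));			/* tail call count starts at 0 in r5 */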
>
>>
>> With the fix, tail call tests are now successful:
>>
>> test_bpf: #0 Tail call leaf jited:1 53 PASS
>> test_bpf: #1 Tail call 2 jited:1 115 PASS
>> test_bpf: #2 Tail call 3 jited:1 154 PASS
>> test_bpf: #3 Tail call 4 jited:1 165 PASS
>> test_bpf: #4 Tail call load/store leaf jited:1 101 PASS
>> test_bpf: #5 Tail call load/store jited:1 141 PASS
>> test_bpf: #6 Tail call error path, max count reached jited:1 994 PASS
>> test_bpf: #7 Tail call count preserved across function calls jited:1 140975 PASS
>> test_bpf: #8 Tail call error path, NULL target jited:1 110 PASS
>> test_bpf: #9 Tail call error path, index out of range jited:1 69 PASS
>> test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]
>>
>> Fixes: 51c66ad849a7 ("powerpc/bpf: Implement extended BPF on PPC32")
>> Cc: stable@...r.kernel.org
>> Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
>> ---
>> arch/powerpc/net/bpf_jit_comp32.c | 25 +++++++++++--------------
>> 1 file changed, 11 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
>> index 43f1c76d48ce..97e75b8181ca 100644
>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>> @@ -115,21 +115,19 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
>>
>> /* First arg comes in as a 32 bits pointer. */
>> EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_1), _R3));
>> - EMIT(PPC_RAW_LI(bpf_to_ppc(BPF_REG_1) - 1, 0));
>> + EMIT(PPC_RAW_LI(_R0, 0));
>> +
>> +#define BPF_TAILCALL_PROLOGUE_SIZE 8
>> +
>> EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx)));
>>
>> /*
>> - * Initialize tail_call_cnt in stack frame if we do tail calls.
>> - * Otherwise, put in NOPs so that it can be skipped when we are
>> - * invoked through a tail call.
>> + * Save tail_call_cnt in stack frame if we do tail calls.
>> */
>> if (ctx->seen & SEEN_TAILCALL)
>> - EMIT(PPC_RAW_STW(bpf_to_ppc(BPF_REG_1) - 1, _R1,
>> - bpf_jit_stack_offsetof(ctx, BPF_PPC_TC)));
>> - else
>> - EMIT(PPC_RAW_NOP());
>> + EMIT(PPC_RAW_STW(_R0, _R1, bpf_jit_stack_offsetof(ctx, BPF_PPC_TC)));
>>
>> -#define BPF_TAILCALL_PROLOGUE_SIZE 16
>> + EMIT(PPC_RAW_LI(bpf_to_ppc(BPF_REG_1) - 1, 0));
>>
>> /*
>> * We need a stack frame, but we don't necessarily need to
>> @@ -244,7 +242,6 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
>> EMIT(PPC_RAW_RLWINM(_R3, b2p_index, 2, 0, 29));
>> EMIT(PPC_RAW_ADD(_R3, _R3, b2p_bpf_array));
>> EMIT(PPC_RAW_LWZ(_R3, _R3, offsetof(struct bpf_array, ptrs)));
>> - EMIT(PPC_RAW_STW(_R0, _R1, bpf_jit_stack_offsetof(ctx, BPF_PPC_TC)));
>>
>> /*
>> * if (prog == NULL)
>> @@ -257,20 +254,20 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
>> EMIT(PPC_RAW_LWZ(_R3, _R3, offsetof(struct bpf_prog, bpf_func)));
>>
>> if (ctx->seen & SEEN_FUNC)
>> - EMIT(PPC_RAW_LWZ(_R0, _R1, BPF_PPC_STACKFRAME(ctx) + PPC_LR_STKOFF));
>> + EMIT(PPC_RAW_LWZ(_R5, _R1, BPF_PPC_STACKFRAME(ctx) + PPC_LR_STKOFF));
>>
>> EMIT(PPC_RAW_ADDIC(_R3, _R3, BPF_TAILCALL_PROLOGUE_SIZE));
>>
>> if (ctx->seen & SEEN_FUNC)
>> - EMIT(PPC_RAW_MTLR(_R0));
>> + EMIT(PPC_RAW_MTLR(_R5));
>
> Should we explicitly zero-out _R5 after this?
I don't know, is that required?
By the way, if I start using _R5 instead of _R0 for the TCC, then this
won't change anyway.
>
> You can move the above PPC_RAW_LWZ() and PPC_RAW_MTLR() instructions, as
> well as the ADDI below for r1 into bpf_jit_emit_common_epilogue() and
> not have to repeat those here.
Right, although I wanted to minimise the churn. But yes, I can do that,
especially as we'll now use _R5 for the TCC and keep _R0 for mtlr.
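Something like this inside bpf_jit_emit_common_epilogue() then, as a
rough, untested sketch:

	/* ... restore NVRs as today ... */

	/* Also restore LR and tear the frame down here, so the tail call
	 * path and the normal exit share this code instead of repeating it. */
	if (ctx->seen & SEEN_FUNC) {
		EMIT(PPC_RAW_LWZ(_R0, _R1, BPF_PPC_STACKFRAME(ctx) + PPC_LR_STKOFF));
		EMIT(PPC_RAW_MTLR(_R0));
	}
	EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME(ctx)));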
Christophe
>
> - Naveen
>
>>
>> EMIT(PPC_RAW_MTCTR(_R3));
>>
>> - EMIT(PPC_RAW_MR(_R3, bpf_to_ppc(BPF_REG_1)));
>> -
>> /* tear restore NVRs, ... */
>> bpf_jit_emit_common_epilogue(image, ctx);
>>
>> + EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME(ctx)));
>> +
>> EMIT(PPC_RAW_BCTR());
>>
>> /* out: */
>> --
>> 2.38.1
>>
>>