Message-ID: <877d5gwrxh.fsf@cloudflare.com>
Date: Thu, 16 Jun 2022 17:24:50 +0200
From: Jakub Sitnicki <jakub@...udflare.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, kernel-team@...udflare.com
Subject: Re: [RFC bpf] selftests/bpf: Curious case of a successful tailcall
that returns to caller
On Thu, Jun 16, 2022 at 05:00 PM +02, Maciej Fijalkowski wrote:
> On Thu, Jun 16, 2022 at 01:02:52PM +0200, Jakub Sitnicki wrote:
>> While working on the aarch64 JIT to allow mixing bpf2bpf calls with
>> tailcalls, I noticed unexpected tailcall behavior in the x86 JIT.
>>
>> I don't know whether this is by design or a bug. The bpf_tail_call helper
>> documentation says that the user should not expect control flow to
>> return to the previous program if the tail call was successful:
>>
>> > If the call succeeds, the kernel immediately runs the first
>> > instruction of the new program. This is not a function call,
>> > and it never returns to the previous program.
>>
>> However, when a tailcall happens from a subprogram, that is, after a
>> bpf2bpf call, that is not the case. We return to the caller program
>> because the stack destruction is too shallow: only the BPF stack of the
>> top-most BPF function gets destroyed.
>>
>> This in turn allows the return value of the tailcall'ed program to get
>> overwritten, as the test below demonstrates. It currently fails on
>> x86:
>
> Disclaimer: some time has passed since I looked into this :P
>
> To me the bug would be if the test had returned 1 in your case. If I
> recall correctly, that was the design choice: tailcalls, when mixed with
> bpf2bpf, consume the current stack frame. When a tailcall happens from a
> subprogram, we return to the caller of that subprog. We added logic to
> the verifier that checks that this (tc + bpf2bpf) mix cannot cause a
> stack overflow. We even limit the stack frame size to 256 bytes in such
> a case.
>
> Cilium docs explain this:
> https://docs.cilium.io/en/latest/bpf/#bpf-to-bpf-calls
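If I read this right, the behavior boils down to something like the
sketch below (illustrative only, not the test from my original mail;
the program, map, and section names are made up). The tail call happens
in a subprogram, so only that subprogram's frame gets unwound, and the
subprogram's caller runs again afterwards, replacing the tail-called
program's return value:

/* Illustrative sketch -- not the test from the original mail. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
        __uint(max_entries, 1);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("tc")
int target(struct __sk_buff *skb)
{
        return 1;       /* return value of the tail-called program */
}

static __noinline int subprog(struct __sk_buff *skb)
{
        bpf_tail_call(skb, &jmp_table, 0);
        return 2;       /* reached only if the tail call fails */
}

SEC("tc")
int entry(struct __sk_buff *skb)
{
        /* On x86, as discussed above, a successful tail call in
         * subprog() unwinds only subprog()'s stack frame, so execution
         * resumes here and the value below replaces whatever target()
         * returned.
         */
        subprog(skb);
        return 3;
}

char _license[] SEC("license") = "GPL";

With target() installed at key 0 of jmp_table from userspace, I would
expect entry() to return 3 rather than 1.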
Thanks for such a quick response.
This answers my question. I should have looked in the Cilium docs.
I will see how to work this bit of info into the helper docs.
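For the record, my reading of the 256-byte limit is that the verifier
refuses the load once the stack of the frames leading up to a
tail-calling subprogram reaches 256 bytes. A rough sketch of what I
would expect to be rejected (again illustrative only; names and sizes
are made up, and jmp_table is a prog array like in the sketch above):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
        __uint(max_entries, 1);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

static __noinline int do_tail_call(struct __sk_buff *skb)
{
        bpf_tail_call(skb, &jmp_table, 0);
        return 0;
}

SEC("tc")
int entry(struct __sk_buff *skb)
{
        /* Roughly 320 bytes of stack in this frame, followed by a
         * bpf2bpf call into a subprog that does a tail call. That is
         * more than the 256 bytes mentioned above, so I would expect
         * the verifier to refuse to load it.
         */
        volatile char buf[320];

        buf[0] = 1;
        return do_tail_call(skb) + buf[0];
}

char _license[] SEC("license") = "GPL";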
[...]