Message-ID: <56a6a35c-7320-4569-71e3-c4daffee78f3@huawei.com>
Date: Sat, 15 Jul 2023 17:10:26 +0800
From: Pu Lehui <pulehui@...wei.com>
To: Björn Töpel <bjorn@...nel.org>,
Song Shuai <suagrfillet@...il.com>, <paul.walmsley@...ive.com>,
<palmer@...belt.com>, <aou@...s.berkeley.edu>,
<rostedt@...dmis.org>, <mhiramat@...nel.org>,
<mark.rutland@....com>, <guoren@...nel.org>, <bjorn@...osinc.com>,
<jszhang@...nel.org>, <conor.dooley@...rochip.com>,
<palmer@...osinc.com>
CC: <linux-riscv@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>, <songshuaishuai@...ylab.org>,
<bpf@...r.kernel.org>
Subject: Re: [PATCH V11 0/5] riscv: Optimize function trace
On 2023/7/13 2:11, Björn Töpel wrote:
> Song Shuai <suagrfillet@...il.com> writes:
>
> [...]
>
>> Add WITH_DIRECT_CALLS support [3] (patch 3, 4)
>> ==============================================
>
> We've had some offlist discussions, so here's some input for a wider
> audience! Most importantly, this is for Palmer: this series should not
> be merged until a proper BPF trampoline fix is in place.
>
> Note that what's currently usable from BPF trampoline *works*. It's
> when this series is added that it breaks.
>
> TL;DR This series adds DYNAMIC_FTRACE_WITH_DIRECT_CALLS, which enables
> fentry/fexit BPF trampoline support. Unfortunately the
> fexit/BPF_TRAMP_F_SKIP_FRAME parts of the RV BPF trampoline break with
> this addition, and need to be addressed *prior to* merging this series.
> An easy way to reproduce is to run any of the kselftest tests that use
> fexit patching.
>
> The issue is around the nop sled, and how a call is done: the nop sled
> (patchable-function-entry) size changed from 16B to 8B in commit
> 6724a76cff85 ("riscv: ftrace: Reduce the detour code size to half"), but
> the BPF code still assumes the old 16B. So it'll work for BPF programs,
> but not for regular kernel functions.
>
> An example:
>
> | ffffffff80fa4150 <bpf_fentry_test1>:
> | ffffffff80fa4150: 0001 nop
> | ffffffff80fa4152: 0001 nop
> | ffffffff80fa4154: 0001 nop
> | ffffffff80fa4156: 0001 nop
> | ffffffff80fa4158: 1141 add sp,sp,-16
> | ffffffff80fa415a: e422 sd s0,8(sp)
> | ffffffff80fa415c: 0800 add s0,sp,16
> | ffffffff80fa415e: 6422 ld s0,8(sp)
> | ffffffff80fa4160: 2505 addw a0,a0,1
> | ffffffff80fa4162: 0141 add sp,sp,16
> | ffffffff80fa4164: 8082 ret
>
> is patched to:
>
> | ffffffff80fa4150: f70c0297 auipc t0,-150208512
> | ffffffff80fa4154: eb0282e7 jalr t0,t0,-336
>
> The return address to bpf_fentry_test1 is stored in t0 at BPF
> trampoline entry. The return to the *parent* is in ra. The trampoline
> has to deal with this.
>
> For BPF_TRAMP_F_SKIP_FRAME/CALL_ORIG, the BPF trampoline will skip too
> many bytes, and not correctly handle parent calls.
>
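[Illustration only, not the kernel's or the JIT's actual code: a minimal C
sketch of the sled-size assumption that goes wrong here. The constants and
the helper name are made up for the example.]

  /* The trampoline has to skip the patched 8B detour when it jumps back
   * into the traced function; keeping the old 16B constant makes it land
   * 8 bytes into the real function body. */
  #define OLD_DETOUR_BYTES 16  /* detour size before commit 6724a76cff85 */
  #define NEW_DETOUR_BYTES  8  /* current patchable-function-entry size  */

  static unsigned long skip_detour(unsigned long func_entry)
  {
          /* must be NEW_DETOUR_BYTES on current kernels */
          return func_entry + NEW_DETOUR_BYTES;
  }
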
> Further: the BPF trampoline currently patches the nops for BPF programs
> differently from how ftrace does it. That should be changed to match
> what ftrace does (auipc/jalr t0).
>
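[As a rough, self-contained C sketch of the auipc/jalr t0 call-site encoding
ftrace uses (instruction layout per the RISC-V base ISA; the helper name is
made up for illustration):]

  #include <stdint.h>

  #define RV_T0        5u     /* x5 */
  #define RV_OPC_AUIPC 0x17u
  #define RV_OPC_JALR  0x67u

  /* Build the two 32-bit instructions that replace the 8B nop sled:
   *   auipc t0, hi20      ; t0 = pc + hi20
   *   jalr  t0, lo12(t0)  ; jump to the trampoline, resume address in t0
   * offset is the signed distance from the call site to the trampoline. */
  static void make_call_t0(int32_t offset, uint32_t insn[2])
  {
          uint32_t hi20 = ((uint32_t)offset + 0x800u) & 0xfffff000u;
          uint32_t lo12 = (uint32_t)offset & 0xfffu;

          insn[0] = hi20 | (RV_T0 << 7) | RV_OPC_AUIPC;
          insn[1] = (lo12 << 20) | (RV_T0 << 15) | (RV_T0 << 7) | RV_OPC_JALR;
  }

With this form the trampoline sees the traced function's resume address in
t0, while ra still holds the return address to the parent, as described
above.
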
> To summarize:
> * Align BPF nop sled with patchable-function-entry: 8B.
> * Adapt BPF trampoline for 8B nop sleds.
> * Adapt BPF trampoline t0 return, ra parent scheme.
>
Thanks Björn, I have made an adaptation as follows; looking forward to
your review.
https://lore.kernel.org/bpf/20230715090137.2141358-1-pulehui@huaweicloud.com/
>
> Cheers,
> Björn
>
>