Message-Id: <20221021203158.4464ac19d8b19b6da6a40852@kernel.org>
Date: Fri, 21 Oct 2022 20:31:58 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Florent Revest <revest@...omium.org>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Xu Kuohai <xukuohai@...wei.com>,
Mark Rutland <mark.rutland@....com>,
Catalin Marinas <catalin.marinas@....com>,
Daniel Borkmann <daniel@...earbox.net>,
Xu Kuohai <xukuohai@...weicloud.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, Will Deacon <will@...nel.org>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Oleg Nesterov <oleg@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
Zi Shen Lim <zlim.lnx@...il.com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Ard Biesheuvel <ardb@...nel.org>,
Marc Zyngier <maz@...nel.org>, Guo Ren <guoren@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64
Hi Florent,
On Mon, 17 Oct 2022 19:55:06 +0200
Florent Revest <revest@...omium.org> wrote:
> On Thu, Oct 6, 2022 at 6:29 PM Steven Rostedt <rostedt@...dmis.org> wrote:
> >
> > On Thu, 6 Oct 2022 18:19:12 +0200
> > Florent Revest <revest@...omium.org> wrote:
> >
> > > Sure, we can give this a try. I'll work on a macro that generates the
> > > 7 callbacks, and we can check how much that helps. My belief right now
> > > is that ftrace's iteration over all ops on arm64 is where we lose most
> > > time, but now that we have numbers it's pretty easy to check that
> > > hypothesis :)
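
[Editor's note: to make the above concrete, here is a rough userspace
sketch of what such per-argument-count callbacks could look like. All
names below are invented for illustration; this is not from an actual
patch.]

#include <stdio.h>

/* Stand-in for invoking the attached BPF program (invented name). */
static void run_prog(int nargs, unsigned long *args)
{
	printf("prog: %d args, arg0=%lu\n", nargs, nargs ? args[0] : 0);
}

/* Generate one callback per argument count, so a callsite with N
 * arguments only forwards N values instead of a full register dump. */
#define DEFINE_FENTRY_CB(N, DECL, ...)					\
	static void fentry_cb_##N DECL					\
	{								\
		unsigned long args[] = { __VA_ARGS__ };			\
		run_prog(N, args);					\
	}

DEFINE_FENTRY_CB(1, (unsigned long a0), a0)
DEFINE_FENTRY_CB(2, (unsigned long a0, unsigned long a1), a0, a1)
/* ... and so on for each of the 7 arities. */

int main(void)
{
	fentry_cb_1(42);
	fentry_cb_2(1, 2);
	return 0;
}
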
> >
> > Ah, I forgot that's what Mark's code is doing. But yes, that needs to be
> > fixed first. I forget that arm64 doesn't have the dedicated trampolines yet.
> >
> > So, let's hold off until that is complete.
> >
> > -- Steve
>
> Mark finished an implementation of his per-callsite-ops and min-args
> branches (meaning that we can now skip ftrace's expensive saving of
> all registers and its iteration over all ops when only one is attached):
> - https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
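
[Editor's note: a sketch of the call-ops idea for context. The types,
names, and the -16 offset below are invented for illustration; this is
not Mark's actual code. The point is that the attached ftrace_ops is
reachable from the patched callsite itself, so dispatch is a single load
plus an indirect call rather than a walk of the global ops list.]

struct ftrace_ops;
typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
			      struct ftrace_ops *ops, void *regs);
struct ftrace_ops { ftrace_func_t func; };

/* Called from the patched site; ip is the instrumented function's
 * entry address, and a literal placed just before it holds the ops
 * pointer. */
static void trampoline_dispatch(unsigned long ip, unsigned long parent_ip,
				void *regs)
{
	struct ftrace_ops *ops = *(struct ftrace_ops **)(ip - 16);

	ops->func(ip, parent_ip, ops, regs);	/* no ops-list iteration */
}
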
>
> And Masami wrote similar patches to what I had originally done to
> fprobe in my branch:
> - https://github.com/mhiramat/linux/commits/kprobes/fprobe-update
>
> So I was able to rebase my previous "bpf on fprobe" branch on top of
> these (as before, it's just good enough for benchmarking and to give a
> general sense of the idea, not for a thorough code review):
> - https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>
> And I ran the benchmarks on my rpi4. I have different baseline numbers
> than Xu, so I ran everything again and tried to keep the format the
> same. "indirect call" refers to the branch I just linked, and "direct
> call" refers to the series this is a reply to (Xu's work).
Thanks for sharing the measurement results. Yes, the fprobe/rethook
implementation is just a port of the kretprobes implementation, so it
has not been optimized much yet.
BTW, I remember Wuqiang's patch for kretprobes:
https://lore.kernel.org/all/20210830173324.32507-1-wuqiang.matt@bytedance.com/T/#u
It is aimed at fixing scalability, but it may also improve performance
a bit. It should not be hard to port to a recent kernel.
Can you try it too?
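
[Editor's note: a toy model of the idea in that patch, as described
above: per-CPU free lists so that getting/returning a kretprobe instance
mostly touches CPU-local state instead of all CPUs contending on a
single freelist head. All names are invented here, and the atomics and
memory ordering that do the real work in the patch are elided.]

#define NR_CPUS 4

struct instance { struct instance *next; };

/* One LIFO of free instances per CPU. */
struct percpu_pool { struct instance *head[NR_CPUS]; };

static struct instance *pool_get(struct percpu_pool *p, int cpu)
{
	/* Try the local CPU first, then fall back to stealing. */
	for (int i = 0; i < NR_CPUS; i++) {
		int c = (cpu + i) % NR_CPUS;
		struct instance *in = p->head[c];

		if (in) {
			p->head[c] = in->next;
			return in;
		}
	}
	return NULL;
}

static void pool_put(struct percpu_pool *p, int cpu, struct instance *in)
{
	in->next = p->head[cpu];
	p->head[cpu] = in;
}
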
Anyway, eventually I would like to remove the current kretprobe-based
implementation and unify the fexit hook with the function-graph
tracer. That should give better performance.
Thank you,
>
> 1. test with dd
>
> 1.1 when no bpf prog attached to vfs_write
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 3.94315 s, 130 MB/s
>
>
> 1.2 attach bpf prog with kprobe, bpftrace -e 'kprobe:vfs_write {}'
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 5.80493 s, 88.2 MB/s
>
>
> 1.3 attach bpf prog with direct call, bpftrace -e 'kfunc:vfs_write {}'
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 4.18579 s, 122 MB/s
>
>
> 1.4 attach bpf prog with indirect call, bpftrace -e 'kfunc:vfs_write {}'
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 4.92616 s, 104 MB/s
>
>
> 2. test with bpf/bench
>
> 2.1 bench trig-base
> Iter 0 ( 86.518us): hits 0.700M/s ( 0.700M/prod), drops 0.000M/s, total operations 0.700M/s
> Iter 1 (-26.352us): hits 0.701M/s ( 0.701M/prod), drops 0.000M/s, total operations 0.701M/s
> Iter 2 (  1.092us): hits 0.701M/s ( 0.701M/prod), drops 0.000M/s, total operations 0.701M/s
> Iter 3 ( -1.890us): hits 0.701M/s ( 0.701M/prod), drops 0.000M/s, total operations 0.701M/s
> Iter 4 ( -2.315us): hits 0.701M/s ( 0.701M/prod), drops 0.000M/s, total operations 0.701M/s
> Iter 5 (  4.184us): hits 0.701M/s ( 0.701M/prod), drops 0.000M/s, total operations 0.701M/s
> Iter 6 ( -3.241us): hits 0.701M/s ( 0.701M/prod), drops 0.000M/s, total operations 0.701M/s
> Summary: hits 0.701 ± 0.000M/s ( 0.701M/prod), drops 0.000 ± 0.000M/s, total operations 0.701 ± 0.000M/s
>
> 2.2 bench trig-kprobe
> Iter 0 ( 96.833us): hits 0.290M/s ( 0.290M/prod), drops 0.000M/s, total operations 0.290M/s
> Iter 1 (-20.834us): hits 0.291M/s ( 0.291M/prod), drops 0.000M/s, total operations 0.291M/s
> Iter 2 ( -2.426us): hits 0.291M/s ( 0.291M/prod), drops 0.000M/s, total operations 0.291M/s
> Iter 3 ( 22.332us): hits 0.292M/s ( 0.292M/prod), drops 0.000M/s, total operations 0.292M/s
> Iter 4 (-18.204us): hits 0.292M/s ( 0.292M/prod), drops 0.000M/s, total operations 0.292M/s
> Iter 5 (  5.370us): hits 0.292M/s ( 0.292M/prod), drops 0.000M/s, total operations 0.292M/s
> Iter 6 ( -7.853us): hits 0.290M/s ( 0.290M/prod), drops 0.000M/s, total operations 0.290M/s
> Summary: hits 0.291 ± 0.001M/s ( 0.291M/prod), drops 0.000 ± 0.000M/s, total operations 0.291 ± 0.001M/s
>
> 2.3 bench trig-fentry, with direct call
> Iter 0 ( 86.481us): hits 0.530M/s ( 0.530M/prod), drops 0.000M/s, total operations 0.530M/s
> Iter 1 (-12.593us): hits 0.536M/s ( 0.536M/prod), drops 0.000M/s, total operations 0.536M/s
> Iter 2 ( -5.760us): hits 0.532M/s ( 0.532M/prod), drops 0.000M/s, total operations 0.532M/s
> Iter 3 (  1.629us): hits 0.532M/s ( 0.532M/prod), drops 0.000M/s, total operations 0.532M/s
> Iter 4 ( -1.945us): hits 0.533M/s ( 0.533M/prod), drops 0.000M/s, total operations 0.533M/s
> Iter 5 ( -1.297us): hits 0.532M/s ( 0.532M/prod), drops 0.000M/s, total operations 0.532M/s
> Iter 6 (  0.444us): hits 0.535M/s ( 0.535M/prod), drops 0.000M/s, total operations 0.535M/s
> Summary: hits 0.533 ± 0.002M/s ( 0.533M/prod), drops 0.000 ± 0.000M/s, total operations 0.533 ± 0.002M/s
>
> 2.4 bench trig-fentry, with indirect call
> Iter 0 ( 84.463us): hits 0.404M/s ( 0.404M/prod), drops 0.000M/s, total operations 0.404M/s
> Iter 1 (-16.260us): hits 0.405M/s ( 0.405M/prod), drops 0.000M/s, total operations 0.405M/s
> Iter 2 ( -1.038us): hits 0.405M/s ( 0.405M/prod), drops 0.000M/s, total operations 0.405M/s
> Iter 3 ( -3.797us): hits 0.405M/s ( 0.405M/prod), drops 0.000M/s, total operations 0.405M/s
> Iter 4 ( -0.537us): hits 0.402M/s ( 0.402M/prod), drops 0.000M/s, total operations 0.402M/s
> Iter 5 (  3.536us): hits 0.403M/s ( 0.403M/prod), drops 0.000M/s, total operations 0.403M/s
> Iter 6 ( 12.203us): hits 0.404M/s ( 0.404M/prod), drops 0.000M/s, total operations 0.404M/s
> Summary: hits 0.404 ± 0.001M/s ( 0.404M/prod), drops 0.000 ± 0.000M/s, total operations 0.404 ± 0.001M/s
>
>
> 3. perf report of bench trig-fentry
>
> 3.1 with direct call
>
> 98.67%  0.27%  bench  bench  [.] trigger_producer
>         |
>          --98.40%--trigger_producer
>                    |
>                    |--96.63%--syscall
>                    |          |
>                    |           --71.90%--el0t_64_sync
>                    |                     el0t_64_sync_handler
>                    |                     el0_svc
>                    |                     do_el0_svc
>                    |                     |
>                    |                     |--70.94%--el0_svc_common
>                    |                     |          |
>                    |                     |          |--29.55%--invoke_syscall
>                    |                     |          |          |
>                    |                     |          |          |--26.23%--__arm64_sys_getpgid
>                    |                     |          |          |          |
>                    |                     |          |          |          |--18.88%--bpf_trampoline_6442462665_0
>                    |                     |          |          |          |          |
>                    |                     |          |          |          |          |--6.85%--__bpf_prog_enter
>                    |                     |          |          |          |          |          |
>                    |                     |          |          |          |          |           --2.68%--migrate_disable
>                    |                     |          |          |          |          |
>                    |                     |          |          |          |          |--5.28%--__bpf_prog_exit
>                    |                     |          |          |          |          |          |
>                    |                     |          |          |          |          |           --1.29%--migrate_enable
>                    |                     |          |          |          |          |
>                    |                     |          |          |          |          |--3.96%--bpf_prog_21856463590f61f1_bench_trigger_fentry
>                    |                     |          |          |          |          |
>                    |                     |          |          |          |           --0.61%--__rcu_read_lock
>                    |                     |          |          |          |
>                    |                     |          |          |           --4.42%--find_task_by_vpid
>                    |                     |          |          |                     |
>                    |                     |          |          |                     |--2.53%--radix_tree_lookup
>                    |                     |          |          |                     |
>                    |                     |          |          |                      --0.61%--idr_find
>                    |                     |          |          |
>                    |                     |          |           --0.81%--pid_vnr
>                    |                     |          |
>                    |                     |           --0.53%--__arm64_sys_getpgid
>                    |                     |
>                    |                      --0.95%--invoke_syscall
>                    |
>                     --0.99%--syscall@plt
>
>
> 3.2 with indirect call
>
> 98.68%  0.20%  bench  bench  [.] trigger_producer
>         |
>          --98.48%--trigger_producer
>                    |
>                     --97.47%--syscall
>                               |
>                                --76.11%--el0t_64_sync
>                                          el0t_64_sync_handler
>                                          el0_svc
>                                          do_el0_svc
>                                          |
>                                          |--75.52%--el0_svc_common
>                                          |          |
>                                          |          |--46.35%--invoke_syscall
>                                          |          |          |
>                                          |          |           --44.06%--__arm64_sys_getpgid
>                                          |          |                     |
>                                          |          |                     |--35.40%--ftrace_caller
>                                          |          |                     |          |
>                                          |          |                     |           --34.04%--fprobe_handler
>                                          |          |                     |                     |
>                                          |          |                     |                     |--15.61%--bpf_fprobe_entry
>                                          |          |                     |                     |          |
>                                          |          |                     |                     |          |--3.79%--__bpf_prog_enter
>                                          |          |                     |                     |          |          |
>                                          |          |                     |                     |          |           --0.80%--migrate_disable
>                                          |          |                     |                     |          |
>                                          |          |                     |                     |          |--3.74%--__bpf_prog_exit
>                                          |          |                     |                     |          |          |
>                                          |          |                     |                     |          |           --0.77%--migrate_enable
>                                          |          |                     |                     |          |
>                                          |          |                     |                     |           --2.65%--bpf_prog_21856463590f61f1_bench_trigger_fentry
>                                          |          |                     |                     |
>                                          |          |                     |                     |--12.65%--rethook_trampoline_handler
>                                          |          |                     |                     |
>                                          |          |                     |                     |--1.70%--rethook_try_get
>                                          |          |                     |                     |          |
>                                          |          |                     |                     |           --1.48%--rcu_is_watching
>                                          |          |                     |                     |
>                                          |          |                     |                     |--1.46%--freelist_try_get
>                                          |          |                     |                     |
>                                          |          |                     |                      --0.65%--rethook_recycle
>                                          |          |                     |
>                                          |          |                      --6.36%--find_task_by_vpid
>                                          |          |                                |
>                                          |          |                                |--3.64%--radix_tree_lookup
>                                          |          |                                |
>                                          |          |                                 --1.74%--idr_find
>                                          |          |
>                                          |           --1.05%--ftrace_caller
>                                          |
>                                           --0.59%--invoke_syscall
>
> This looks slightly better than before, but it is still a pretty
> significant performance hit compared to direct calls.
>
> Note that I can't really make sense of the perf report with indirect
> calls. It always reports that about 12% of the time is spent in
> rethook_trampoline_handler, but I verified, with both a WARN in that
> function and a breakpoint in a debugger, that this function does *not*
> get called when running this "bench trig-fentry" benchmark. Also, it
> wouldn't make sense for fprobe_handler to call it, so I'm quite
> confused why perf would report this call and such a long time spent
> there. Does anyone know what I could be missing here?
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>