Message-ID: <Z-JX5ImltdTFoFgr@gmail.com>
Date: Tue, 25 Mar 2025 08:14:44 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H . Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Masami Hiramatsu <mhiramat@...nel.org>, x86@...nel.org,
bpf@...r.kernel.org, Eric Dumazet <eric.dumazet@...il.com>,
Greg Thelen <gthelen@...gle.com>,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH v3] x86/alternatives: remove false sharing in
poke_int3_handler()
* Eric Dumazet <edumazet@...gle.com> wrote:
> eBPF programs can be run 50,000,000 times per second on busy servers.
>
> Whenever /proc/sys/kernel/bpf_stats_enabled is turned off,
> hundreds of call sites are patched from text_poke_bp_batch()
> and we see a huge loss of performance, due to false sharing
> on bp_desc.refs, lasting up to three seconds.
>
> 51.30% server_bin [kernel.kallsyms] [k] poke_int3_handler
> |
> |--46.45%--poke_int3_handler
> | exc_int3
> | asm_exc_int3
> | |
> | |--24.26%--cls_bpf_classify
> | | tcf_classify
> | | __dev_queue_xmit
> | | ip6_finish_output2
> | | ip6_output
> | | ip6_xmit
> | | inet6_csk_xmit
> | | __tcp_transmit_skb
>
> Fix this by replacing bp_desc.refs with a per-cpu bp_refs.
>
> Before the patch, on a host with 240 cores (480 threads):
>
> sysctl -wq kernel.bpf_stats_enabled=0
>
> text_poke_bp_batch(nr_entries=164) : Took 2655300 usec
>
> bpftool prog | grep run_time_ns
> ...
> 105: sched_cls name hn_egress tag 699fc5eea64144e3 gpl run_time_ns
> 3009063719 run_cnt 82757845 : average cost is 36 nsec per call
>
> After this patch:
>
> sysctl -wq kernel.bpf_stats_enabled=0
>
> text_poke_bp_batch(nr_entries=164) : Took 702 usec
>
> $ bpftool prog | grep run_time_ns
> ...
> 105: sched_cls name hn_egress tag 699fc5eea64144e3 gpl run_time_ns
> 1928223019 run_cnt 67682728 : average cost is 28 nsec per call
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> ---
> arch/x86/kernel/alternative.c | 30 ++++++++++++++++++------------
> 1 file changed, 18 insertions(+), 12 deletions(-)
Thanks for the updates. I've further improved the changelog (see
attached below), and have tentatively applied it to
tip:x86/alternatives.
Thanks,
Ingo
==============================>
From: Eric Dumazet <edumazet@...gle.com>
Date: Tue, 25 Mar 2025 04:33:16 +0000
Subject: [PATCH] x86/alternatives: Improve code-patching scalability by removing false sharing in poke_int3_handler()
eBPF programs can be run 50,000,000 times per second on busy servers.
Whenever /proc/sys/kernel/bpf_stats_enabled is turned off,
hundreds of call sites are patched from text_poke_bp_batch()
and we see a huge loss of performance, due to false sharing
on bp_desc.refs, lasting up to three seconds.
51.30% server_bin [kernel.kallsyms] [k] poke_int3_handler
|
|--46.45%--poke_int3_handler
| exc_int3
| asm_exc_int3
| |
| |--24.26%--cls_bpf_classify
| | tcf_classify
| | __dev_queue_xmit
| | ip6_finish_output2
| | ip6_output
| | ip6_xmit
| | inet6_csk_xmit
| | __tcp_transmit_skb
Fix this by replacing bp_desc.refs with a per-cpu bp_refs.
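For reference, a minimal sketch of the data-structure change this describes
(the actual hunks live in arch/x86/kernel/alternative.c and are not quoted in
this mail; only the bp_desc.refs and bp_refs names come from the changelog,
the surrounding structure here is an assumption):

#include <linux/atomic.h>
#include <linux/percpu.h>

/*
 * Before: a single refcount shared by every CPU.  Each INT3 trap does
 * an atomic RMW on the same cache line, so while hundreds of call
 * sites are being patched, all 480 threads keep bouncing that one line
 * between their caches.
 */
static struct bp_patching_desc {
	/* ... other patching state ... */
	atomic_t	refs;
} bp_desc;

/*
 * After: one refcount per CPU.  poke_int3_handler() only touches its
 * own CPU's counter, so the handler's fast path no longer contends on
 * a shared cache line.
 */
static DEFINE_PER_CPU(atomic_t, bp_refs);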
Before the patch, on a host with 240 cores (480 threads):
$ sysctl -wq kernel.bpf_stats_enabled=0
text_poke_bp_batch(nr_entries=164) : Took 2655300 usec
$ bpftool prog | grep run_time_ns
...
105: sched_cls name hn_egress tag 699fc5eea64144e3 gpl run_time_ns
3009063719 run_cnt 82757845 : average cost is 36 nsec per call
After this patch:
$ sysctl -wq kernel.bpf_stats_enabled=0
text_poke_bp_batch(nr_entries=164) : Took 702 usec
$ bpftool prog | grep run_time_ns
...
105: sched_cls name hn_egress tag 699fc5eea64144e3 gpl run_time_ns
1928223019 run_cnt 67682728 : average cost is 28 nsec per call
I.e. text-patching performance improved ~3700x: from 2.65 seconds
to 0.0007 seconds.
Since the atomic_cond_read_acquire(refs, !VAL) spin-loop was not triggered
even once in my tests, add an unlikely() annotation, because this appears
to be the common case.
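Roughly, the annotated wait would look like this (again a sketch, assuming
the teardown in text_poke_bp_batch() now walks the per-cpu counters; only
the atomic_cond_read_acquire(refs, !VAL) expression and the unlikely() hint
are taken from the changelog):

	int i;

	/*
	 * Drop each CPU's reference; only in the rare case where a
	 * poke_int3_handler() invocation still holds one do we spin
	 * until it is released.  The spin was never observed in the
	 * tests above, hence the unlikely() hint.
	 */
	for_each_possible_cpu(i) {
		atomic_t *refs = per_cpu_ptr(&bp_refs, i);

		if (unlikely(!atomic_dec_and_test(refs)))
			atomic_cond_read_acquire(refs, !VAL);
	}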
[ mingo: Improved the changelog some more. ]
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Kees Cook <keescook@...omium.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Link: https://lore.kernel.org/r/20250325043316.874518-1-edumazet@google.com