Message-ID: <2c2d1b85-9c4a-5122-c471-e4a729b4df03@iogearbox.net>
Date: Fri, 15 Mar 2024 16:03:21 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: Jesper Dangaard Brouer <hawk@...nel.org>, bpf@...r.kernel.org
Cc: Alexei Starovoitov <ast@...nel.org>,
 Daniel Borkmann <borkmann@...earbox.net>, martin.lau@...nel.org,
 netdev@...r.kernel.org, kernel-team@...udflare.com
Subject: Re: [PATCH bpf-next] bpf/lpm_trie: inline longest_prefix_match for
 fastpath

On 3/12/24 4:17 PM, Jesper Dangaard Brouer wrote:
> The BPF map type LPM (Longest Prefix Match) is used heavily
> in production by multiple products that have BPF components.
> Perf data shows trie_lookup_elem() and longest_prefix_match()
> among the top kernel functions in perf top.

You mention these are heavy hitters in prod ...

> For every level in the LPM trie, trie_lookup_elem() calls out
> to longest_prefix_match().  The compiler is free to inline this
> call, but chooses not to, because other slowpath callers exist
> that can be invoked via syscall, such as trie_update_elem(),
> trie_delete_elem() and trie_get_next_key().
> 
>   bcc/tools/funccount -Ti 1 'trie_lookup_elem|longest_prefix_match.isra.0'
>   FUNC                                    COUNT
>   trie_lookup_elem                       664945
>   longest_prefix_match.isra.0           8101507
> 
> Observation on a single randomly chosen metal host shows roughly a
> factor of 12 between the two call counts (8101507 / 664945 ≈ 12),
> i.e. on average about 12 trie levels are searched per lookup.
> 
> This patch force-inlines longest_prefix_match(), but only for
> the lookup fastpath, to keep the object code size increase in check.
> 
>   $ bloat-o-meter kernel/bpf/lpm_trie.o.orig-noinline kernel/bpf/lpm_trie.o
>   add/remove: 1/1 grow/shrink: 1/0 up/down: 335/-4 (331)
>   Function                                     old     new   delta
>   trie_lookup_elem                             179     510    +331
>   __BTF_ID__struct__lpm_trie__706741             -       4      +4
>   __BTF_ID__struct__lpm_trie__706733             4       -      -4
>   Total: Before=3056, After=3387, chg +10.83%

... and here you quote bloat-o-meter instead. But do you also see an
observable perf gain in prod after this change? (No objection from my
side, but it might be good to mention that here; if not, why make the
change?)
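
A minimal sketch of one way to do such a split (illustrative only;
__longest_prefix_match is a made-up name, not necessarily what the
patch uses):

  /* Matching logic lives in an __always_inline helper, so the lookup
   * fastpath gets its own copy without a call (and without the
   * return-thunk mitigation cost per trie level).
   */
  static __always_inline size_t
  __longest_prefix_match(const struct lpm_trie *trie,
                         const struct lpm_trie_node *node,
                         const struct bpf_lpm_trie_key *key)
  {
          /* ... unchanged prefix matching logic ... */
  }

  /* Out-of-line wrapper kept for the syscall-driven slowpaths:
   * trie_update_elem(), trie_delete_elem(), trie_get_next_key().
   */
  static size_t longest_prefix_match(const struct lpm_trie *trie,
                                     const struct lpm_trie_node *node,
                                     const struct bpf_lpm_trie_key *key)
  {
          return __longest_prefix_match(trie, node, key);
  }

  /* trie_lookup_elem() then uses __longest_prefix_match() directly. */

That keeps a single shared copy for the slowpaths while only
trie_lookup_elem() pays the text size increase.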

> Details: Due to the AMD mitigation for SRSO (Speculative Return Stack
> Overflow), these function calls have additional overhead. On newer
> kernels this shows
> up under srso_safe_ret() + srso_return_thunk(), and on older kernels (6.1)
> under __x86_return_thunk(). Thus, for production workloads the biggest gain
> comes from avoiding this mitigation overhead.
> 
> Signed-off-by: Jesper Dangaard Brouer <hawk@...nel.org>
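
Side note: whether that SRSO return-thunk overhead is in play on a given
box can be checked via sysfs, e.g.:

  $ cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
  Mitigation: Safe RET

(output varies by CPU, microcode and kernel config; "Mitigation: Safe RET"
is the case where returns go through the srso_safe_ret() path mentioned
above).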
