Message-ID: <b849aa68-0f7e-455f-ba09-ff1c811db771@linux.dev>
Date: Mon, 18 Mar 2024 09:07:54 -0700
From: Yonghong Song <yonghong.song@...ux.dev>
To: Jesper Dangaard Brouer <hawk@...nel.org>, bpf@...r.kernel.org,
 Daniel Borkmann <borkmann@...earbox.net>
Cc: Alexei Starovoitov <ast@...nel.org>, martin.lau@...nel.org,
 netdev@...r.kernel.org, bp@...en8.de, kernel-team@...udflare.com
Subject: Re: [PATCH bpf-next V2] bpf/lpm_trie: inline longest_prefix_match for
 fastpath


On 3/18/24 6:25 AM, Jesper Dangaard Brouer wrote:
> The BPF map type LPM (Longest Prefix Match) is used heavily
> in production by multiple products that have BPF components.
> Perf data shows trie_lookup_elem() and longest_prefix_match()
> among the top entries in the kernel's perf top output.
>
> For every level in the LPM trie, trie_lookup_elem() calls out
> to longest_prefix_match().  The compiler is free to inline this
> call, but chooses not to, because other slowpath callers that
> can be invoked via syscall exist, such as trie_update_elem(),
> trie_delete_elem() and trie_get_next_key().
>
>   bcc/tools/funccount -Ti 1 'trie_lookup_elem|longest_prefix_match.isra.0'
>   FUNC                                    COUNT
>   trie_lookup_elem                       664945
>   longest_prefix_match.isra.0           8101507
>
> Observations on a single, randomly chosen machine show a factor
> of 12 between the two call counts, i.e. an average of 12 levels
> of the trie being searched per lookup.
>
> This patch force-inlines longest_prefix_match(), but only for
> the lookup fastpath, to balance the increase in object code size.
>
> In production with AMD CPUs, measuring the function latency of
> 'trie_lookup_elem' (bcc/tools/funclatency), we are seeing a
> function latency reduction of 7-8% with this patch applied (to
> production kernels 6.6 and 6.1). Analyzing perf data, we can
> attribute this rather large improvement to the reduced overhead
> of the AMD side-channel mitigation SRSO (Speculative Return
> Stack Overflow), whose cost is paid on function returns: inlining
> removes one call/return pair per trie level searched.
>
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Jesper Dangaard Brouer <hawk@...nel.org>

I checked an internal PGO (Profile-Guided Optimization) kernel and
it does exactly what is described above: longest_prefix_match() is
inlined into trie_lookup_elem(), but not into the other callers.

Acked-by: Yonghong Song <yonghong.song@...ux.dev>
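
For readers less familiar with this wrapper pattern, here is a minimal
user-space sketch (not the kernel code; the names __hot_helper,
hot_helper, fast_path and slow_path are made up for illustration) of
how a helper can be force-inlined on the hot path while the slow paths
keep calling a single out-of-line copy:

#include <stddef.h>

#define __always_inline inline __attribute__((always_inline))

/* Hot helper: the body is duplicated into callers that use it directly. */
static __always_inline size_t __hot_helper(size_t a, size_t b)
{
        return a < b ? a : b;   /* stand-in for the real prefix match */
}

/* Out-of-line wrapper: slow paths call this, so only one copy of the
 * helper body is emitted for them.
 */
static size_t hot_helper(size_t a, size_t b)
{
        return __hot_helper(a, b);
}

size_t fast_path(size_t a, size_t b)
{
        return __hot_helper(a, b);  /* inlined: no call/ret overhead */
}

size_t slow_path(size_t a, size_t b)
{
        return hot_helper(a, b);    /* ordinary out-of-line call */
}

With -O2, gcc and clang emit the helper body inline in fast_path() and
keep a single hot_helper symbol for slow_path(), which is the shape the
patch gives to __longest_prefix_match()/longest_prefix_match().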

> ---
>   kernel/bpf/lpm_trie.c |   18 +++++++++++++-----
>   1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
> index 050fe1ebf0f7..939620b91c0e 100644
> --- a/kernel/bpf/lpm_trie.c
> +++ b/kernel/bpf/lpm_trie.c
> @@ -155,16 +155,17 @@ static inline int extract_bit(const u8 *data, size_t index)
>   }
>   
>   /**
> - * longest_prefix_match() - determine the longest prefix
> + * __longest_prefix_match() - determine the longest prefix
>    * @trie:	The trie to get internal sizes from
>    * @node:	The node to operate on
>    * @key:	The key to compare to @node
>    *
>    * Determine the longest prefix of @node that matches the bits in @key.
>    */
> -static size_t longest_prefix_match(const struct lpm_trie *trie,
> -				   const struct lpm_trie_node *node,
> -				   const struct bpf_lpm_trie_key_u8 *key)
> +static __always_inline
> +size_t __longest_prefix_match(const struct lpm_trie *trie,
> +			      const struct lpm_trie_node *node,
> +			      const struct bpf_lpm_trie_key_u8 *key)
>   {
>   	u32 limit = min(node->prefixlen, key->prefixlen);
>   	u32 prefixlen = 0, i = 0;
> @@ -224,6 +225,13 @@ static size_t longest_prefix_match(const struct lpm_trie *trie,
>   	return prefixlen;
>   }
>   
> +static size_t longest_prefix_match(const struct lpm_trie *trie,
> +				   const struct lpm_trie_node *node,
> +				   const struct bpf_lpm_trie_key_u8 *key)
> +{
> +	return __longest_prefix_match(trie, node, key);
> +}
> +
>   /* Called from syscall or from eBPF program */
>   static void *trie_lookup_elem(struct bpf_map *map, void *_key)
>   {
> @@ -245,7 +253,7 @@ static void *trie_lookup_elem(struct bpf_map *map, void *_key)
>   		 * If it's the maximum possible prefix for this trie, we have
>   		 * an exact match and can return it directly.
>   		 */
> -		matchlen = longest_prefix_match(trie, node, key);
> +		matchlen = __longest_prefix_match(trie, node, key);
>   		if (matchlen == trie->max_prefixlen) {
>   			found = node;
>   			break;
>
>
>
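
As a side note, for anyone wanting to reproduce the latency numbers from
the commit message, an invocation along these lines should work with a
reasonably recent bcc (the 30 second duration is arbitrary):

  bcc/tools/funclatency -u -d 30 'trie_lookup_elem'

This prints a microsecond latency histogram for trie_lookup_elem over
the trace window, which can be compared before and after applying the
patch.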
