Message-Id: <20250422-afabre-traits-010-rfc2-v2-7-92bcc6b146c9@arthurfabre.com>
Date: Tue, 22 Apr 2025 15:23:36 +0200
From: Arthur Fabre <arthur@...hurfabre.com>
To: netdev@...r.kernel.org, bpf@...r.kernel.org
Cc: jakub@...udflare.com, hawk@...nel.org, yan@...udflare.com,
jbrandeburg@...udflare.com, thoiland@...hat.com, lbiancon@...hat.com,
ast@...nel.org, kuba@...nel.org, edumazet@...gle.com,
Arthur Fabre <arthur@...hurfabre.com>
Subject: [PATCH RFC bpf-next v2 07/17] trait: Replace memmove calls with
inline move

When inserting or deleting traits, we need to move any subsequent
traits over. This is currently done with memmove().

Replace the memmove() call with an inline implementation to avoid the
function call overhead, which is especially expensive on AMD CPUs with
the SRSO mitigation.

In practice there shouldn't be much data to move around, and we're
naturally limited to 238 bytes max, so a dumb implementation should
hopefully be fast enough.
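
The key point is choosing the copy direction from the shift direction so
overlapping bytes are never clobbered before they are read. As a rough
user-space illustration only (the function name and the byte-at-a-time
loop are made up for this sketch; the actual helper in the diff below
also unrolls the copy into 8/4/2/1-byte chunks):

#include <stddef.h>

/* Shift n bytes starting at base by off bytes: off > 0 moves data
 * towards higher addresses, off < 0 towards lower addresses. The copy
 * direction is picked so the overlapping region is read before it is
 * overwritten, like memmove().
 */
static void move_sketch(unsigned char *base, ptrdiff_t off, size_t n)
{
	unsigned char *dst = base + off;

	if (off > 0) {
		/* Shifting right: copy the last byte first. */
		for (size_t i = n; i-- > 0; )
			dst[i] = base[i];
	} else if (off < 0) {
		/* Shifting left: a plain forward copy is safe. */
		for (size_t i = 0; i < n; i++)
			dst[i] = base[i];
	}
}
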
Jesper Brouer kindly ran benchmarks on real hardware with three configs:

- Intel: E5-1650 v4
- AMD SRSO: 9684X SRSO
- AMD IBPB: 9684X SRSO=IBPB

                 Intel   AMD IBPB   AMD SRSO
xdp-trait-get    5.530      3.901      9.188   (ns/op)
xdp-trait-set    7.538      4.941     10.050   (ns/op)
xdp-trait-move  14.245      8.865     14.834   (ns/op)
function call    1.319      1.359      5.703   (ns/op)
indirect call    8.922      6.251     10.329   (ns/op)

Signed-off-by: Arthur Fabre <arthur@...hurfabre.com>
---
include/net/trait.h | 40 ++++++++++++++++++++++++++++++++++++----
1 file changed, 36 insertions(+), 4 deletions(-)
diff --git a/include/net/trait.h b/include/net/trait.h
index 4013351549731c4e3bede211dbe9fbe651556dc9..1fc5f773ab9af689ac0f6e29fd3c1e62c04cfff8 100644
--- a/include/net/trait.h
+++ b/include/net/trait.h
@@ -74,6 +74,40 @@ static __always_inline int __trait_offset(struct __trait_hdr h, u64 key)
return sizeof(struct __trait_hdr) + __trait_total_length(__trait_and(h, ~(~0llu << key)));
}
+/* Avoid overhead of memmove() function call when possible. */
+static __always_inline void __trait_move(void *src, int off, size_t n)
+{
+ if (n == 0)
+ return;
+
+ if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) || BITS_PER_LONG != 64) {
+ memmove(src + off, src, n);
+ return;
+ }
+
+ /* Need to move in reverse to handle overlap. */
+ if (off > 0)
+ src += n;
+
+#define ___trait_move(op) do { \
+ src -= (off > 0) ? sizeof(u##op) : 0; \
+ *(u##op *)(src + off) = *(u##op *)src; \
+ src += (off < 0) ? sizeof(u##op) : 0; \
+ } while (0)
+
+ for (int w = 0; w < n / 8; w++)
+ ___trait_move(64);
+
+ if (n & 4)
+ ___trait_move(32);
+
+ if (n & 2)
+ ___trait_move(16);
+
+ if (n & 1)
+ ___trait_move(8);
+}
+
/**
* traits_init() - Initialize a trait store.
* @traits: Start of trait store area.
@@ -141,8 +175,7 @@ int trait_set(void *traits, void *hard_end, u64 key, const void *val, u64 len, u
return -ENOSPC;
/* Memmove all the kvs after us over. */
- if (traits_size(traits) > off)
- memmove(traits + off + len, traits + off, traits_size(traits) - off);
+ __trait_move(traits + off, len, traits_size(traits) - off);
}
u64 encode_len = 0;
@@ -258,8 +291,7 @@ static __always_inline int trait_del(void *traits, u64 key)
int len = __trait_total_length(__trait_and(*h, (1ull << key)));
/* Memmove all the kvs after us over */
- if (traits_size(traits) > off + len)
- memmove(traits + off, traits + off + len, traits_size(traits) - off - len);
+ __trait_move(traits + off + len, -len, traits_size(traits) - off - len);
/* Clear our length in header */
h->high &= ~(1ull << key);
--
2.43.0
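
For readers mapping the removed memmove() calls onto the new helper's
arguments, here is a hedged user-space sanity sketch (move_stub, main
and the sample sizes are hypothetical, not part of the patch) of the two
call shapes used above:

#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Byte-at-a-time stand-in for __trait_move(): shift n bytes at p by
 * off, picking the copy direction so the overlap isn't clobbered.
 */
static void move_stub(unsigned char *p, ptrdiff_t off, size_t n)
{
	if (off > 0)
		for (size_t i = n; i-- > 0; )
			p[(ptrdiff_t)i + off] = p[i];
	else if (off < 0)
		for (size_t i = 0; i < n; i++)
			p[(ptrdiff_t)i + off] = p[i];
}

int main(void)
{
	unsigned char got[32] = "0123456789", want[32] = "0123456789";
	size_t size = 10, off = 3, len = 4;

	/* Insert path: memmove(traits + off + len, traits + off, size - off)
	 * becomes __trait_move(traits + off, len, size - off).
	 */
	move_stub(got + off, len, size - off);
	memmove(want + off + len, want + off, size - off);
	assert(!memcmp(got, want, sizeof(got)));

	/* Delete path: memmove(traits + off, traits + off + len, size - off - len)
	 * becomes __trait_move(traits + off + len, -len, size - off - len).
	 */
	move_stub(got + off + len, -(ptrdiff_t)len, size - off - len);
	memmove(want + off, want + off + len, size - off - len);
	assert(!memcmp(got, want, sizeof(got)));

	return 0;
}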