Message-ID: <ed7920365daf5eff1c82892b57e918d3db786ac7.1701193577.git.dxu@dxuuu.xyz>
Date: Tue, 28 Nov 2023 10:54:23 -0700
From: Daniel Xu <dxu@...uu.xyz>
To: ndesaulniers@...gle.com, andrii@...nel.org, nathan@...nel.org,
daniel@...earbox.net, ast@...nel.org, steffen.klassert@...unet.com,
antony.antony@...unet.com, alexei.starovoitov@...il.com,
yonghong.song@...ux.dev, eddyz87@...il.com
Cc: martin.lau@...ux.dev, song@...nel.org, john.fastabend@...il.com,
kpsingh@...nel.org, sdf@...gle.com, haoluo@...gle.com,
jolsa@...nel.org, trix@...hat.com, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, llvm@...ts.linux.dev,
devel@...ux-ipsec.org, netdev@...r.kernel.org
Subject: [PATCH ipsec-next v2 3/6] libbpf: Add BPF_CORE_WRITE_BITFIELD() macro

Similar to reading from CO-RE bitfields, we need a CO-RE-aware bitfield
writing wrapper to make the verifier happy.

Two alternatives to this approach are:

1. Use the upcoming `preserve_static_offset` [0] attribute to disable
CO-RE on specific structs.
2. Use broader byte-sized writes to write to bitfields.

(1) is a bit hard to use. It requires specific and not-very-obvious
annotations to bpftool-generated vmlinux.h. It's also not generally
available in released LLVM versions yet.

(2) makes the code quite hard to read and write. Especially if
BPF_CORE_READ_BITFIELD() is already being used, it makes more sense to
have an inverse helper for writing.

[0]: https://reviews.llvm.org/D133361
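
For illustration, here is a minimal usage sketch. The struct and field
names are hypothetical; the only requirement is that the type carries
CO-RE relocation info, e.g. via __attribute__((preserve_access_index))
or a bpftool-generated vmlinux.h:

  #include <bpf/bpf_core_read.h>

  /* stand-in type; real users would take this from vmlinux.h */
  struct pkt_meta {
          unsigned char flag: 1;
          unsigned char prio: 3;
  } __attribute__((preserve_access_index));

  static inline void set_prio(struct pkt_meta *m, unsigned char prio)
  {
          /* CO-RE-relocated bitfield load... */
          unsigned char old = BPF_CORE_READ_BITFIELD(m, prio);

          /* ...and the matching CO-RE-relocated store */
          if (old != prio)
                  BPF_CORE_WRITE_BITFIELD(m, prio, prio);
  }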
From: Eduard Zingerman <eddyz87@...il.com>
Signed-off-by: Daniel Xu <dxu@...uu.xyz>
---
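A quick numeric walk-through of the macro's arithmetic, using a
hypothetical 4-bit field occupying bits 4-7 of a 1-byte storage unit on
a little-endian target:

  byte_size = 1, lshift = 56, rshift = 60
  rpad = rshift - lshift = 4             /* bits right of the field */
  mask = (~0ULL << 60) >> 56 = 0xf0      /* selects bits 4..7 */
  val  = (val & ~0xf0) | ((new_val << 4) & 0xf0)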
tools/lib/bpf/bpf_core_read.h | 36 +++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)

diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index 1ac57bb7ac55..7a764f65d299 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -111,6 +111,42 @@ enum bpf_enum_value_kind {
val; \
})

+/*
+ * Write to a bitfield, identified by s->field.
+ * This is the inverse of BPF_CORE_READ_BITFIELD().
+ */
+#define BPF_CORE_WRITE_BITFIELD(s, field, new_val) ({ \
+ void *p = (void *)s + __CORE_RELO(s, field, BYTE_OFFSET); \
+ unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE); \
+ unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64); \
+ unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64); \
+ unsigned long long mask, val, nval = new_val; \
+ unsigned int rpad = rshift - lshift; \
+ \
+ /* same effect as barrier_var(p): hide p from the optimizer */ \
+ asm volatile("" : "+r"(p)); \
+ \
+ /* load the storage unit that contains the bitfield */ \
+ switch (byte_size) { \
+ case 1: val = *(unsigned char *)p; break; \
+ case 2: val = *(unsigned short *)p; break; \
+ case 4: val = *(unsigned int *)p; break; \
+ case 8: val = *(unsigned long long *)p; break; \
+ } \
+ \
+ /* mask selects the field's bits within the loaded unit */ \
+ mask = (~0ULL << rshift) >> lshift; \
+ /* clear the old bits and splice in the (shifted) new value */ \
+ val = (val & ~mask) | ((nval << rpad) & mask); \
+ \
+ switch (byte_size) { \
+ case 1: *(unsigned char *)p = val; break; \
+ case 2: *(unsigned short *)p = val; break; \
+ case 4: *(unsigned int *)p = val; break; \
+ case 8: *(unsigned long long *)p = val; break; \
+ } \
+})
+
#define ___bpf_field_ref1(field) (field)
#define ___bpf_field_ref2(type, field) (((typeof(type) *)0)->field)
#define ___bpf_field_ref(args...) \
--
2.42.1