Message-Id: <20200710173123.427983-1-jakub@cloudflare.com>
Date: Fri, 10 Jul 2020 19:31:23 +0200
From: Jakub Sitnicki <jakub@cloudflare.com>
To: bpf@vger.kernel.org
Cc: netdev@vger.kernel.org, kernel-team@cloudflare.com,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>
Subject: [PATCH bpf] bpf: Shift and mask loads narrower than context field size
When the size of a load from context is the same as the target field
size, but less than the context field size, the verifier does not emit
the shift and mask instructions for loads at a non-zero offset.

This has the unexpected effect of loading the same data no matter what
the offset is, while the expected behavior would be to load zeros for
offsets at or beyond the target field size.
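For reference, the expected semantics can be modeled in plain C: a
narrow load of 'size' bytes at byte offset 'off' behaves as if the
whole target field were loaded, then shifted and masked, so bytes past
the target field read as zero. A minimal little-endian userspace
sketch (illustrative names only, not verifier code):

#include <assert.h>
#include <stdint.h>

/* Model of a narrow little-endian load: take 'size' bytes at byte
 * offset 'off' from a target field holding 'val'. Bytes past the
 * target field's width must read as zero.
 */
static uint64_t narrow_load(uint64_t val, unsigned int off, unsigned int size)
{
	uint64_t mask = size < 8 ? (1ULL << (size * 8)) - 1 : ~0ULL;

	return (val >> (off * 8)) & mask;
}

int main(void)
{
	uint64_t proto = 0x1234;	/* u16 target field, e.g. sk_protocol */

	assert(narrow_load(proto, 0, 2) == 0x1234);	/* lower half */
	assert(narrow_load(proto, 2, 2) == 0);		/* upper half reads zero */
	return 0;
}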
For instance, a u16 load from a u32 context field backed by a u16
target field, at an offset of 2 bytes, results in:

SEC("sk_reuseport/narrow_half")
int reuseport_narrow_half(struct sk_reuseport_md *ctx)
{
__u16 *half;
half = (__u16 *)&ctx->ip_protocol;
if (half[0] == 0xaaaa)
return SK_DROP;
if (half[1] == 0xbbbb)
return SK_DROP;
return SK_PASS;
}
int reuseport_narrow_half(struct sk_reuseport_md * ctx):
; int reuseport_narrow_half(struct sk_reuseport_md *ctx)
0: (b4) w0 = 0
; if (half[0] == 0xaaaa)
1: (79) r2 = *(u64 *)(r1 +8)
2: (69) r2 = *(u16 *)(r2 +924)
; if (half[0] == 0xaaaa)
3: (16) if w2 == 0xaaaa goto pc+5
; if (half[1] == 0xbbbb)
4: (79) r1 = *(u64 *)(r1 +8)
5: (69) r1 = *(u16 *)(r1 +924)
6: (b4) w0 = 1
; if (half[1] == 0xbbbb)
7: (56) if w1 != 0xbbbb goto pc+1
8: (b4) w0 = 0
; }
9: (95) exit
In this case half[0] == half[1] == sk->sk_protocol, which backs the
ctx->ip_protocol field.
Fix it by shifting and masking any load from context that is narrower
than the context field size (is_narrower_load = size < ctx_field_size),
in addition to loads that are narrower than the target field size.
The "size < target_size" check is left in place to cover the case when a
context field is narrower than its target field, even if we might not have
such case now. (It would have to be a u32 context field backed by a u64
target field, with context fields all being 4-bytes or wider.)
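The effect of the one-line change can be modeled in C as well. In the
hypothetical helper below (not the verifier code itself), the fixed
variant mirrors "is_narrower_load || size < target_size" and the buggy
one "&&"; with size == 2, ctx_field_size == 4 and target_size == 2, as
in the example above, only the fixed variant emits the fixup:

/* Hypothetical model of the verifier's decision: does a narrow ctx
 * load get shift and mask instructions appended?
 */
static int emits_shift_mask(unsigned int size, unsigned int ctx_field_size,
			    unsigned int target_size, int fixed)
{
	int is_narrower_load = size < ctx_field_size;

	return fixed ? is_narrower_load || size < target_size
		     : is_narrower_load && size < target_size;
}

/* u16 load from u32 ctx field backed by u16 target field:
 *   emits_shift_mask(2, 4, 2, 0) == 0 -> offset silently ignored
 *   emits_shift_mask(2, 4, 2, 1) == 1 -> shift and mask emitted
 */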
Going back to the example, with the fix in place, the upper half load from
ctx->ip_protocol yields zero:
int reuseport_narrow_half(struct sk_reuseport_md * ctx):
; int reuseport_narrow_half(struct sk_reuseport_md *ctx)
0: (b4) w0 = 0
; if (half[0] == 0xaaaa)
1: (79) r2 = *(u64 *)(r1 +8)
2: (69) r2 = *(u16 *)(r2 +924)
3: (54) w2 &= 65535
; if (half[0] == 0xaaaa)
4: (16) if w2 == 0xaaaa goto pc+7
; if (half[1] == 0xbbbb)
5: (79) r1 = *(u64 *)(r1 +8)
6: (69) r1 = *(u16 *)(r1 +924)
7: (74) w1 >>= 16
8: (54) w1 &= 65535
9: (b4) w0 = 1
; if (half[1] == 0xbbbb)
10: (56) if w1 != 0xbbbb goto pc+1
11: (b4) w0 = 0
; }
12: (95) exit
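Working through instructions 6-8 by hand confirms it, assuming an
example sk_protocol value of 0x1234:

uint32_t w1 = 0x1234;	/* insn 6: u16 load, zero-extended */
w1 >>= 16;		/* insn 7: select the upper half -> 0 */
w1 &= 65535;		/* insn 8: mask, a no-op after the shift */

With w1 == 0, the branch at instruction 10 is always taken, so the
half[1] == 0xbbbb test can never drop the packet.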
Fixes: f96da09473b5 ("bpf: simplify narrower ctx access")
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
kernel/bpf/verifier.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 94cead5a43e5..1c4d0e24a5a2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9760,7 +9760,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 			return -EINVAL;
 		}
 
-		if (is_narrower_load && size < target_size) {
+		if (is_narrower_load || size < target_size) {
 			u8 shift = bpf_ctx_narrow_access_offset(
 				off, size, size_default) * 8;
 			if (ctx_field_size <= 4) {
--
2.25.4