Message-Id: <20210427170859.579924-1-jackmanb@google.com>
Date: Tue, 27 Apr 2021 17:08:59 +0000
From: Brendan Jackman <jackmanb@...gle.com>
To: bpf@...r.kernel.org
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
linux-kernel@...r.kernel.org, Brendan Jackman <jackmanb@...gle.com>
Subject: [PATCH bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring
One of our benchmarks running in (Google-internal) CI pushes data
through the ringbuf faster than userspace is able to consume
it. In this case it seems we're actually able to get >INT_MAX entries
in a single ring_buffer__consume() call. ASAN detected that cnt
overflows in this case.
Fix by just setting a limit on the number of entries that can be
consumed.
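For context, here is a minimal sketch of the userspace side (hypothetical
callback and map setup, not part of this patch) showing where that int
count comes from: ring_buffer__consume() returns the number of records
handled, so draining more than INT_MAX records in one call cannot be
reported correctly.

  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* Hypothetical per-record callback; returning 0 continues consumption. */
  static int handle_event(void *ctx, void *data, size_t size)
  {
          return 0;
  }

  int consume_all(int map_fd)
  {
          /* map_fd is assumed to be the fd of a BPF_MAP_TYPE_RINGBUF map,
           * e.g. obtained via bpf_map__fd().
           */
          struct ring_buffer *rb;
          int n;

          rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
          if (!rb)
                  return -1;

          /* Drains every record currently in the ring and returns the count. */
          n = ring_buffer__consume(rb);
          if (n < 0)
                  fprintf(stderr, "ring_buffer__consume failed: %d\n", n);
          else
                  printf("consumed %d records\n", n);

          ring_buffer__free(rb);
          return n;
  }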
Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
Signed-off-by: Brendan Jackman <jackmanb@...gle.com>
---
tools/lib/bpf/ringbuf.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index e7a8d847161f..445a21df0934 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -213,8 +213,9 @@ static int ringbuf_process_ring(struct ring* r)
 	do {
 		got_new_data = false;
 		prod_pos = smp_load_acquire(r->producer_pos);
-		while (cons_pos < prod_pos) {
+		/* Don't read more than INT_MAX, or the return value won't make sense. */
+		while (cons_pos < prod_pos && cnt < INT_MAX) {
 			len_ptr = r->data + (cons_pos & r->mask);
 			len = smp_load_acquire(len_ptr);
--
2.31.1.498.g6c1eba8ee3d-goog