Message-Id: <20210626181156.1873604-1-rkovhaev@gmail.com>
Date: Sat, 26 Jun 2021 11:11:56 -0700
From: Rustam Kovhaev <rkovhaev@...il.com>
To: ast@...nel.org, andrii@...nel.org, daniel@...earbox.net,
kafai@...com, songliubraving@...com, yhs@...com,
john.fastabend@...il.com, kpsingh@...nel.org
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, Rustam Kovhaev <rkovhaev@...il.com>
Subject: [PATCH] bpf: fix false positive kmemleak report in bpf_ringbuf_area_alloc()

kmemleak scans struct page, but it does not scan the page content.
If we allocate some memory with kmalloc(), then allocate a page with
alloc_page(), and store the kmalloc pointer somewhere inside that page,
kmemleak will report the kmalloc pointer as a false positive.
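
For illustration, a minimal pattern of the kind that produces such a
false positive might look like this (hypothetical example, not code
from the tree):

	#include <linux/slab.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>

	static int store_in_page(void)
	{
		void *obj = kmalloc(64, GFP_KERNEL);	/* tracked by kmemleak */
		struct page *page = alloc_page(GFP_KERNEL);
		void **slot;

		if (!obj || !page) {
			kfree(obj);
			if (page)
				__free_page(page);
			return -ENOMEM;
		}

		/* The only reference to 'obj' now lives in the page
		 * content, which kmemleak does not scan, so kmemleak
		 * reports 'obj' as a leak even though it is still
		 * reachable through the page.
		 */
		slot = page_address(page);
		*slot = obj;
		return 0;
	}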

We can instruct kmemleak to scan the memory area by calling
kmemleak_alloc()/kmemleak_free(), but part of struct bpf_ringbuf is
mmaped to user space, and if struct bpf_ringbuf ever changes, we would
have to revisit and review the size argument of kmemleak_alloc(),
because we do not want kmemleak to scan the user space memory.
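
A sketch of that rejected alternative, assuming we annotated only the
kernel-private head of the area ourselves (the offsetof() size below is
illustrative and would have to track the struct layout by hand):

	/* hypothetical annotation after vmap(): ask kmemleak to scan
	 * only the part of struct bpf_ringbuf that is never mmaped to
	 * user space
	 */
	kmemleak_alloc(rb, offsetof(struct bpf_ringbuf, consumer_pos),
		       1, GFP_KERNEL);

	/* ... with a matching annotation on the teardown path */
	kmemleak_free(rb);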

Let's simplify things and use kmemleak_not_leak() here.

Link: https://lore.kernel.org/lkml/YNTAqiE7CWJhOK2M@nuc10/
Link: https://lore.kernel.org/lkml/20210615101515.GC26027@arm.com/
Link: https://syzkaller.appspot.com/bug?extid=5d895828587f49e7fe9b
Reported-and-tested-by: syzbot+5d895828587f49e7fe9b@...kaller.appspotmail.com
Signed-off-by: Rustam Kovhaev <rkovhaev@...il.com>
---
kernel/bpf/ringbuf.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 84b3b35fc0d0..9e0c10c6892a 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -8,6 +8,7 @@
 #include <linux/vmalloc.h>
 #include <linux/wait.h>
 #include <linux/poll.h>
+#include <linux/kmemleak.h>
 #include <uapi/linux/btf.h>
 
 #define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
@@ -105,6 +106,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
 		  VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
 	if (rb) {
+		kmemleak_not_leak(pages);
 		rb->pages = pages;
 		rb->nr_pages = nr_pages;
 		return rb;
--
2.30.2