Message-ID: <20240813115334.3922580-2-ruanjinjie@huawei.com>
Date: Tue, 13 Aug 2024 19:53:32 +0800
From: Jinjie Ruan <ruanjinjie@...wei.com>
To: <naveen@...nel.org>, <anil.s.keshavamurthy@...el.com>,
<davem@...emloft.net>, <mhiramat@...nel.org>, <kees@...nel.org>,
<gustavoars@...nel.org>, <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>, <linux-hardening@...r.kernel.org>
CC: <ruanjinjie@...wei.com>
Subject: [PATCH 1/3] kprobes: Annotate structs with __counted_by()
Add the __counted_by compiler attribute to the flexible array member
slot_used to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE.
Signed-off-by: Jinjie Ruan <ruanjinjie@...wei.com>
---
kernel/kprobes.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index da59c68df841..e6f7b0d3b29c 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -92,7 +92,7 @@ struct kprobe_insn_page {
struct kprobe_insn_cache *cache;
int nused;
int ngarbage;
- char slot_used[];
+ char slot_used[] __counted_by(nused);
};
#define KPROBE_INSN_PAGE_SIZE(slots) \
--
2.34.1