Message-ID: <d98bf550-367d-0744-025a-52307248ec82@suse.cz>
Date: Mon, 23 Sep 2019 10:20:59 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrey Ryabinin <aryabinin@...tuozzo.com>,
Walter Wu <walter-zh.wu@...iatek.com>
Cc: Qian Cai <cai@....pw>, Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Matthias Brugger <matthias.bgg@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Andrey Konovalov <andreyknvl@...gle.com>,
Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linux-mediatek@...ts.infradead.org, wsd_upstream@...iatek.com
Subject: [PATCH] mm, debug, kasan: save and dump freeing stack trace for kasan
On 9/16/19 5:57 PM, Andrey Ryabinin wrote:
> I'd rather keep all logic in one place, i.e. "if (!page_owner_disabled && (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())"
> With this no changes in early_debug_pagealloc() required and CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y should also work correctly.
OK.
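For reference, a minimal sketch of how the combined check then looks in
init_page_owner() (matching the patch below; the early return on
page_owner_disabled is the pre-existing one):

	static void init_page_owner(void)
	{
		if (page_owner_disabled)
			return;

		register_dummy_stack();
		register_failure_stack();
		register_early_stack();
		static_branch_enable(&page_owner_inited);
		/* free stack saving needs KASAN or runtime-enabled debug_pagealloc */
		if (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())
			static_branch_enable(&page_owner_free_stack);
		init_early_allocated_pages();
	}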
----8<----
From 7437c43f02682fdde5680fa83e87029f7529e222 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@...e.cz>
Date: Mon, 16 Sep 2019 11:28:19 +0200
Subject: [PATCH] mm, debug, kasan: save and dump freeing stack trace for kasan
The commit "mm, page_owner, debug_pagealloc: save and dump freeing stack trace"
enhanced page_owner to also store the freeing stack trace when debug_pagealloc
is enabled. KASAN would like to use this as well [1], to improve its error
reports for debugging e.g. use-after-free issues. This patch therefore
introduces a helper config option PAGE_OWNER_FREE_STACK, which is enabled when
PAGE_OWNER is enabled together with either DEBUG_PAGEALLOC or KASAN. At boot
time, the freeing stack saving is enabled when booting a KASAN kernel with
page_owner=on, or a non-KASAN kernel with both debug_pagealloc=on and
page_owner=on.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=203967
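As an illustration (example command lines only; both parameters already exist
and CONFIG_PAGE_OWNER=y is assumed), the freeing stack is saved with:

	# KASAN kernel (CONFIG_KASAN=y):
	page_owner=on

	# non-KASAN kernel (CONFIG_DEBUG_PAGEALLOC=y):
	debug_pagealloc=on page_owner=on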
Suggested-by: Dmitry Vyukov <dvyukov@...gle.com>
Suggested-by: Walter Wu <walter-zh.wu@...iatek.com>
Suggested-by: Andrey Ryabinin <aryabinin@...tuozzo.com>
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
Documentation/dev-tools/kasan.rst | 4 ++++
mm/Kconfig.debug | 4 ++++
mm/page_owner.c | 31 ++++++++++++++++++-------------
3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index b72d07d70239..434e605030e9 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -41,6 +41,10 @@ smaller binary while the latter is 1.1 - 2 times faster.
Both KASAN modes work with both SLUB and SLAB memory allocators.
For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
+To augment reports with the last allocation and freeing stack traces of the
+physical page, it is recommended to also configure the kernel with
+CONFIG_PAGE_OWNER=y and boot with page_owner=on.
+
To disable instrumentation for specific files or directories, add a line
similar to the following to the respective kernel Makefile:
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 327b3ebf23bf..1ea247da3322 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -62,6 +62,10 @@ config PAGE_OWNER
If unsure, say N.
+config PAGE_OWNER_FREE_STACK
+ def_bool KASAN || DEBUG_PAGEALLOC
+ depends on PAGE_OWNER
+
config PAGE_POISONING
bool "Poison pages after freeing"
select PAGE_POISONING_NO_SANITY if HIBERNATION
diff --git a/mm/page_owner.c b/mm/page_owner.c
index dee931184788..8b6b05676158 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -24,13 +24,14 @@ struct page_owner {
short last_migrate_reason;
gfp_t gfp_mask;
depot_stack_handle_t handle;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t free_handle;
#endif
};
static bool page_owner_disabled = true;
DEFINE_STATIC_KEY_FALSE(page_owner_inited);
+static DEFINE_STATIC_KEY_FALSE(page_owner_free_stack);
static depot_stack_handle_t dummy_handle;
static depot_stack_handle_t failure_handle;
@@ -91,6 +92,8 @@ static void init_page_owner(void)
register_failure_stack();
register_early_stack();
static_branch_enable(&page_owner_inited);
+ if (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())
+ static_branch_enable(&page_owner_free_stack);
init_early_allocated_pages();
}
@@ -148,11 +151,11 @@ void __reset_page_owner(struct page *page, unsigned int order)
{
int i;
struct page_ext *page_ext;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t handle = 0;
struct page_owner *page_owner;
- if (debug_pagealloc_enabled())
+ if (static_branch_unlikely(&page_owner_free_stack))
handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
#endif
@@ -161,8 +164,8 @@ void __reset_page_owner(struct page *page, unsigned int order)
if (unlikely(!page_ext))
continue;
__clear_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
-#ifdef CONFIG_DEBUG_PAGEALLOC
- if (debug_pagealloc_enabled()) {
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
page_owner = get_page_owner(page_ext);
page_owner->free_handle = handle;
}
@@ -451,14 +454,16 @@ void __dump_page_owner(struct page *page)
stack_trace_print(entries, nr_entries, 0);
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
- handle = READ_ONCE(page_owner->free_handle);
- if (!handle) {
- pr_alert("page_owner free stack trace missing\n");
- } else {
- nr_entries = stack_depot_fetch(handle, &entries);
- pr_alert("page last free stack trace:\n");
- stack_trace_print(entries, nr_entries, 0);
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
+ handle = READ_ONCE(page_owner->free_handle);
+ if (!handle) {
+ pr_alert("page_owner free stack trace missing\n");
+ } else {
+ nr_entries = stack_depot_fetch(handle, &entries);
+ pr_alert("page last free stack trace:\n");
+ stack_trace_print(entries, nr_entries, 0);
+ }
}
#endif
--
2.23.0