Date: Tue, 8 Dec 2020 17:13:01 -0800
From: paulmck@...nel.org
To: rcu@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, kernel-team@...com, mingo@...nel.org,
	jiangshanlai@...il.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...icios.com, josh@...htriplett.org,
	tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
	dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
	oleg@...hat.com, joel@...lfernandes.org, iamjoonsoo.kim@....com,
	andrii@...nel.org, "Paul E. McKenney" <paulmck@...nel.org>,
	linux-mm@...ck.org
Subject: [PATCH v2 sl-b 3/5] mm: Make mem_dump_obj() handle vmalloc() memory

From: "Paul E. McKenney" <paulmck@...nel.org>

This commit adds vmalloc() support to mem_dump_obj().  Note that the
vmalloc_dump_obj() function combines the checking and dumping, in
contrast with the split between kmem_valid_obj() and kmem_dump_obj().
The reason for the difference is that the checking in the vmalloc()
case involves acquiring a global lock, and redundant acquisitions of
global locks should be avoided, even on not-so-fast paths.

Note that this change causes on-stack variables to be reported as
vmalloc() storage from kernel_clone() or similar, depending on the
degree of inlining that your compiler does.  This is likely more
helpful than the earlier "non-paged (local) memory".

Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: <linux-mm@...ck.org>
Reported-by: Andrii Nakryiko <andrii@...nel.org>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
 include/linux/vmalloc.h |  6 ++++++
 mm/util.c               | 12 +++++++-----
 mm/vmalloc.c            | 12 ++++++++++++
 3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 938eaf9..c89c2be 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -248,4 +248,10 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 int register_vmap_purge_notifier(struct notifier_block *nb);
 int unregister_vmap_purge_notifier(struct notifier_block *nb);
 
+#ifdef CONFIG_MMU
+bool vmalloc_dump_obj(void *object);
+#else
+static inline bool vmalloc_dump_obj(void *object) { return false; }
+#endif
+
 #endif /* _LINUX_VMALLOC_H */
diff --git a/mm/util.c b/mm/util.c
index 8c2449f..ee99a0a 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -984,6 +984,12 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
  */
 void mem_dump_obj(void *object)
 {
+	if (kmem_valid_obj(object)) {
+		kmem_dump_obj(object);
+		return;
+	}
+	if (vmalloc_dump_obj(object))
+		return;
 	if (!virt_addr_valid(object)) {
 		if (object == NULL)
 			pr_cont(" NULL pointer.\n");
@@ -993,10 +999,6 @@ void mem_dump_obj(void *object)
 		pr_cont(" non-paged (local) memory.\n");
 		return;
 	}
-	if (kmem_valid_obj(object)) {
-		kmem_dump_obj(object);
-		return;
-	}
-	pr_cont(" non-slab memory.\n");
+	pr_cont(" non-slab/vmalloc memory.\n");
 }
 EXPORT_SYMBOL_GPL(mem_dump_obj);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6ae491a..7421719 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3431,6 +3431,18 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 }
 #endif /* CONFIG_SMP */
 
+bool vmalloc_dump_obj(void *object)
+{
+	struct vm_struct *vm;
+	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
+
+	vm = find_vm_area(objp);
+	if (!vm)
+		return false;
+	pr_cont(" vmalloc allocated at %pS\n", vm->caller);
+	return true;
+}
+
 #ifdef CONFIG_PROC_FS
 static void *s_start(struct seq_file *m, loff_t *pos)
 	__acquires(&vmap_purge_lock)
-- 
2.9.5
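
For illustration only (this is not part of the patch): a minimal test-module
sketch showing what the new code path is expected to report, assuming this
series (including the mem_dump_obj()/kmem_valid_obj() interfaces from the
earlier patches) is applied to the tree. The module name, file name, and
printed prefixes below are invented for the example.

/* mdo_vmalloc_demo.c: illustrative sketch, not part of this series. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>		/* mem_dump_obj(), from earlier in the series */
#include <linux/vmalloc.h>

static int __init mdo_vmalloc_demo_init(void)
{
	u8 *vp = vmalloc(2 * PAGE_SIZE);
	int onstack = 0;

	if (!vp)
		return -ENOMEM;

	/*
	 * Pointer into a vmalloc() region: vmalloc_dump_obj() looks up the
	 * enclosing area with find_vm_area() and prints its vm->caller, so
	 * something like " vmalloc allocated at mdo_vmalloc_demo_init+0x..."
	 * should be appended to the open printk line below.
	 */
	pr_info("vmalloc pointer:");	/* no newline: mem_dump_obj() uses pr_cont() */
	mem_dump_obj(vp + 16);

	/*
	 * With CONFIG_VMAP_STACK, an on-stack address is also a vmalloc()
	 * address, so it is now reported as vmalloc() storage (typically
	 * attributed to kernel_clone() or similar) instead of the earlier
	 * "non-paged (local) memory" message.
	 */
	pr_info("on-stack pointer:");
	mem_dump_obj(&onstack);

	vfree(vp);
	return 0;
}

static void __exit mdo_vmalloc_demo_exit(void)
{
}

module_init(mdo_vmalloc_demo_init);
module_exit(mdo_vmalloc_demo_exit);
MODULE_LICENSE("GPL");	/* mem_dump_obj() is EXPORT_SYMBOL_GPL */

Loading such a module against a patched tree should produce two log lines
whose tails come from vmalloc_dump_obj(); the exact caller names and offsets
depend on the kernel configuration and on compiler inlining, as noted in the
commit log above.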