Date:   Tue, 22 Nov 2022 23:05:27 +0000
From:   "Zhang, Qiang1" <qiang1.zhang@...el.com>
To:     "Zhang, Qiang1" <qiang1.zhang@...el.com>,
        "paulmck@...nel.org" <paulmck@...nel.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "thunder.leizhen@...wei.com" <thunder.leizhen@...wei.com>,
        "frederic@...nel.org" <frederic@...nel.org>,
        "joel@...lfernandes.org" <joel@...lfernandes.org>
CC:     "rcu@...r.kernel.org" <rcu@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v3] mm: Make vmalloc_dump_obj() call in clean context


Gentle ping 😊

Thanks
Zqiang

>Currently, mem_dump_obj() can be invoked from within call_rcu(), and
>call_rcu() may itself be invoked from a non-preemptible code section.
>For an object allocated from vmalloc(), the following scenario may
>occur:
>
>        CPU 0
>task context
>   spin_lock(&vmap_area_lock)
>       Interrupt context
>           call_rcu()
>             mem_dump_obj
>               vmalloc_dump_obj
>                 spin_lock(&vmap_area_lock) <--deadlock
>
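
To make the trigger path above a bit more concrete, here is a rough,
illustration-only sketch. my_obj and my_cb are hypothetical names, and
the path assumes CONFIG_DEBUG_OBJECTS_RCU_HEAD so that a double-queued
rcu_head makes call_rcu() report the object via mem_dump_obj():

	struct my_obj {
		struct rcu_head rhp;
		/* ... */
	};
	struct my_obj *p = vmalloc(sizeof(*p));	/* object from vmalloc() */

	/* in a (hypothetical) interrupt handler: */
	call_rcu(&p->rhp, my_cb);
	call_rcu(&p->rhp, my_cb);	/* duplicate: debug-objects calls
					 * mem_dump_obj(&p->rhp) ->
					 * vmalloc_dump_obj() ->
					 * spin_lock(&vmap_area_lock) */

If that interrupt lands on a CPU whose interrupted task already holds
vmap_area_lock (e.g. inside vmalloc()/vfree() internals), the CPU
self-deadlocks as in the diagram above.
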
>In addition, on a PREEMPT_RT kernel the spinlock is converted to a
>sleepable lock, so vmap_area_lock must not be acquired in a
>non-preemptible code section either. Therefore, this commit makes
>vmalloc_dump_obj() bail out unless it is called from a clean context.
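
For the PREEMPT_RT part: there, spinlock_t is built on rt_mutex and may
sleep, so even without the interrupt scenario above, reaching
vmalloc_dump_obj() with preemption or interrupts disabled is a problem.
A conceptual sketch of the hazard inside mm/vmalloc.c (illustration
only, not part of the patch):

	preempt_disable();
	/* On PREEMPT_RT, spin_lock() may sleep here, triggering a
	 * "BUG: sleeping function called from invalid context" splat: */
	spin_lock(&vmap_area_lock);
	/* ... */
	spin_unlock(&vmap_area_lock);
	preempt_enable();

Hence the patch checks preemptible() rather than only in_interrupt() on
PREEMPT_RT kernels.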
>
>Signed-off-by: Zqiang <qiang1.zhang@...el.com>
>---
>v1->v2:
> Add IS_ENABLED(CONFIG_PREEMPT_RT) check.
>v2->v3:
> Reword the commit message and add code comments.
>
> mm/util.c    |  4 +++-
> mm/vmalloc.c | 25 +++++++++++++++++++++++++
> 2 files changed, 28 insertions(+), 1 deletion(-)
>
>diff --git a/mm/util.c b/mm/util.c
>index 12984e76767e..2b0222a728cc 100644
>--- a/mm/util.c
>+++ b/mm/util.c
>@@ -1128,7 +1128,9 @@ void mem_dump_obj(void *object)
> 		return;
> 
> 	if (virt_addr_valid(object))
>-		type = "non-slab/vmalloc memory";
>+		type = "non-slab memory";
>+	else if (is_vmalloc_addr(object))
>+		type = "vmalloc memory";
> 	else if (object == NULL)
> 		type = "NULL pointer";
> 	else if (object == ZERO_SIZE_PTR)
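
The mm/util.c hunk above is needed because, with this change,
vmalloc_dump_obj() can return false for a valid vmalloc address (when
the calling context is unsafe), and mem_dump_obj() should still say
something useful about such a pointer. Roughly, as I read the new
fallback chain (abbreviated):

	p = vmalloc(PAGE_SIZE);
	/* from interrupt context, or a non-preemptible context on RT: */
	mem_dump_obj(p);
	/* vmalloc_dump_obj(p) declines and returns false;
	 * virt_addr_valid(p) is false for a vmalloc address, so the new
	 * is_vmalloc_addr(p) branch reports the pointer as
	 * "vmalloc memory" instead of a less specific label. */
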
>diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>index ccaa461998f3..4351eafbe7ab 100644
>--- a/mm/vmalloc.c
>+++ b/mm/vmalloc.c
>@@ -4034,6 +4034,31 @@ bool vmalloc_dump_obj(void *object)
> 	struct vm_struct *vm;
> 	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
> 
>+	/* Return directly for non-vmalloc addresses. */
>+	if (!is_vmalloc_addr(objp))
>+		return false;
>+
>+	/*
>+	 * For a non-PREEMPT_RT kernel, fall through to the in_interrupt()
>+	 * check below. For a PREEMPT_RT kernel, vmap_area_lock is a
>+	 * sleepable lock, so it must not be acquired from any
>+	 * non-preemptible section (interrupts or preemption disabled),
>+	 * not just from interrupt context; hence the stricter
>+	 * preemptible() check here.
>+	 */
>+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
>+		return false;
>+
>+	/*
>+	 * If we get here on a PREEMPT_RT kernel, we are in a preemptible
>+	 * context (preemptible() is true), which also implies that
>+	 * in_interrupt() is false. For a non-PREEMPT_RT kernel, checking
>+	 * in_interrupt() is sufficient to avoid deadlocking on
>+	 * vmap_area_lock.
>+	 */
>+	if (in_interrupt())
>+		return false;
>+
> 	vm = find_vm_area(objp);
> 	if (!vm)
> 		return false;
>-- 
>2.25.1
