Date:   Sat, 10 Dec 2022 18:00:48 -0500
From:   Waiman Long <longman@...hat.com>
To:     Catalin Marinas <catalin.marinas@....com>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Muchun Song <songmuchun@...edance.com>,
        Waiman Long <longman@...hat.com>
Subject: [PATCH 2/2] mm/kmemleak: Fix UAF bug in kmemleak_scan()

Commit 6edda04ccc7c ("mm/kmemleak: prevent soft lockup in first
object iteration loop of kmemleak_scan()") fixed a soft lockup
problem in kmemleak_scan() by periodically calling cond_resched().
It takes a reference to the current object before doing so.
Unfortunately, if that object has been deleted from the object_list,
the next object pointed to by its next pointer may no longer be valid
after coming back from cond_resched(). This can result in a
use-after-free and other nasty problems.

Fix this problem by restarting the object scan from the beginning of
the object_list if the object is found to have been de-allocated after
returning from cond_resched().

Fixes: 6edda04ccc7c ("mm/kmemleak: prevent soft lockup in first object iteration loop of kmemleak_scan()")
Signed-off-by: Waiman Long <longman@...hat.com>
---
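Not part of the commit message: below is a minimal userspace C sketch
of the pattern this fix follows, for reviewers unfamiliar with it. All
names in it are hypothetical stand-ins: a mutex plays the role of the
RCU read lock, the refcount that of get_object()/put_object(), and the
"allocated" flag that of OBJECT_ALLOCATED. A concurrent deleter
thread, which would clear "allocated" and unlink the node under the
lock, is omitted for brevity.

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int refcount;		/* analog of kmemleak_object->use_count */
	bool allocated;		/* analog of the OBJECT_ALLOCATED flag */
};

static struct node *list_head;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Analog of kmemleak_cond_resched(): pin the current node, drop the
 * lock so other threads can run, then re-take it.  If the node was
 * deleted in the meantime, its ->next pointer can no longer be
 * trusted, so rewind the cursor to the head of the list.
 */
static void cond_resched_node(struct node **pnode)
{
	struct node *n = *pnode;

	if (!n->allocated)
		return;		/* node is going away; just move on */
	n->refcount++;		/* keep n itself alive across the gap */

	pthread_mutex_unlock(&list_lock);
	sched_yield();		/* a deleter could unlink nodes here */
	pthread_mutex_lock(&list_lock);

	n->refcount--;
	if (!n->allocated)	/* deleted while unlocked: restart */
		*pnode = list_head;
}

static void scan(void)
{
	pthread_mutex_lock(&list_lock);
	for (struct node *n = list_head; n; n = n->next)
		cond_resched_node(&n);	/* may rewind n to the head */
	pthread_mutex_unlock(&list_lock);
}

int main(void)
{
	/* Build a small list; single-threaded, so no race fires here. */
	for (int i = 0; i < 3; i++) {
		struct node *n = calloc(1, sizeof(*n));
		n->allocated = true;
		n->next = list_head;
		list_head = n;
	}
	scan();
	return 0;
}

The key point mirrors the comment in the first hunk below: once the
allocated flag is observed clear after re-taking the lock, the only
continuation point that is guaranteed valid is the list head.
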
 mm/kmemleak.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 8c44f70ed457..d3a8fa4e3af3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1465,15 +1465,26 @@ static void scan_gray_list(void)
  * that the given object won't go away without RCU read lock by performing a
 * get_object() if necessary.
  */
-static void kmemleak_cond_resched(struct kmemleak_object *object)
+static void kmemleak_cond_resched(struct kmemleak_object **pobject)
 {
-	if (!get_object(object))
+	struct kmemleak_object *obj = *pobject;
+
+	if (!(obj->flags & OBJECT_ALLOCATED) || !get_object(obj))
 		return;	/* Try next object */
 
 	rcu_read_unlock();
 	cond_resched();
 	rcu_read_lock();
-	put_object(object);
+	put_object(obj);
+
+	/*
+	 * In the unlikely event that the object had been de-allocated, we
+	 * have to restart the scanning from the beginning of the object_list
+	 * as the object pointed to by the next pointer may have been freed.
+	 */
+	if (unlikely(!(obj->flags & OBJECT_ALLOCATED)))
+		*pobject = list_entry_rcu(object_list.next,
+					  typeof(*obj), object_list);
 }
 
 /*
@@ -1524,7 +1535,7 @@ static void kmemleak_scan(void)
 		raw_spin_unlock_irq(&object->lock);
 
 		if (need_resched())
-			kmemleak_cond_resched(object);
+			kmemleak_cond_resched(&object);
 	}
 	rcu_read_unlock();
 
@@ -1593,7 +1604,7 @@ static void kmemleak_scan(void)
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
 		if (need_resched())
-			kmemleak_cond_resched(object);
+			kmemleak_cond_resched(&object);
 
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
@@ -1630,7 +1641,7 @@ static void kmemleak_scan(void)
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
 		if (need_resched())
-			kmemleak_cond_resched(object);
+			kmemleak_cond_resched(&object);
 
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
-- 
2.31.1
