Message-ID: <20260201113446.4328-1-yuhaocheng035@gmail.com>
Date: Sun,  1 Feb 2026 19:34:36 +0800
From: Haocheng Yu <yuhaocheng035@...il.com>
To: acme@...nel.org
Cc: security@...nel.org,
	linux-kernel@...r.kernel.org,
	linux-perf-users@...r.kernel.org,
	gregkh@...uxfoundation.org
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event
teardown. perf_mmap() accesses the ring_buffer (rb) via map_range()
after mmap_mutex has been released. If another thread closes the
event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a use-after-free or
refcount saturation when map_range() or subsequent logic
dereferences it.

Fix this by extending the scope of mmap_mutex to cover the entire
setup process, including map_range(), ensuring the buffer remains
valid until the mapping is complete.

Signed-off-by: Haocheng Yu <yuhaocheng035@...il.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 			ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
 
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+	/*
+	 * Since pinned accounting is per vm we cannot allow fork() to copy our
+	 * vma.
+	 */
+	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+	vma->vm_ops = &perf_mmap_vmops;
+
+	mapped = get_mapped(event, event_mapped);
+	if (mapped)
+		mapped(event, vma->vm_mm);
+
+	/*
+	 * Try to map it into the page table. On fail, invoke
+	 * perf_mmap_close() to undo the above, as the callsite expects
+	 * full cleanup in this case and therefore does not invoke
+	 * vmops::close().
+	 */
+	ret = map_range(event->rb, vma);
+	if (ret)
+		perf_mmap_close(vma);
+	}
 
 	return ret;
 }
-- 
2.51.0

