Message-ID: <68d4dfde.050a0220.3a612a.0004.GAE@google.com>
Date: Wed, 24 Sep 2025 23:23:26 -0700
From: syzbot <syzbot+62edf7e27b2e8f754525@...kaller.appspotmail.com>
To: linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com
Subject: Forwarded: [PATCH] hugetlbfs: fix lock imbalance in hugetlb_vmdelete_list

For archival purposes, forwarding an incoming command email to
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com.

***

Subject: [PATCH] hugetlbfs: fix lock imbalance in hugetlb_vmdelete_list
Author: kartikey406@...il.com

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master

hugetlb_vmdelete_list() has a lock imbalance bug: the lock-acquisition
and lock-release paths evaluate the VMA's lock conditions at different
times, so the unlock may operate on the wrong lock or be skipped
entirely.

The current code evaluates __vma_shareable_lock() and __vma_private_lock()
twice: once during hugetlb_vma_trylock_write() and again during
hugetlb_vma_unlock_write(). If the VMA's state changes between these calls
(due to unmap operations or concurrent access), the lock and unlock paths
may diverge (see the sketch after the list below), leading to:

1. Unlocking a lock that was never acquired
2. Unlocking the wrong lock type
3. Leaving a lock held
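
For illustration, the two helpers are shaped roughly like this (a
simplified sketch paraphrased from mm/hugetlb.c, not necessarily the
exact upstream code; the point is that each call re-evaluates the
lock-type predicates on its own):

	int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
	{
		if (__vma_shareable_lock(vma)) {	/* evaluation #1 */
			struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

			return down_write_trylock(&vma_lock->rw_sema);
		} else if (__vma_private_lock(vma)) {
			struct resv_map *resv_map = vma_resv_map(vma);

			return down_write_trylock(&resv_map->rw_sema);
		}
		return 1;	/* nothing to lock for this VMA */
	}

	void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
	{
		if (__vma_shareable_lock(vma)) {	/* evaluation #2 */
			struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

			up_write(&vma_lock->rw_sema);
		} else if (__vma_private_lock(vma)) {
			struct resv_map *resv_map = vma_resv_map(vma);

			up_write(&resv_map->rw_sema);
		}
	}

If vma->vm_private_data changes between evaluation #1 and evaluation #2
(for example, a concurrent teardown clearing it), the unlock takes a
different branch than the lock did.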

This manifests as "bad unlock balance detected" warnings:

  WARNING: bad unlock balance detected!
  trying to release lock (&vma_lock->rw_sema) at:
  hugetlb_vmdelete_list+0x179/0x1c0 fs/hugetlbfs/inode.c:501
  but there are no more locks to release!

Fix this by saving the lock type and pointer when acquiring the lock,
then using the saved information for unlock, ensuring symmetric lock
operations regardless of any VMA state changes.
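
In condensed form, the resulting pattern inside the loop in
hugetlb_vmdelete_list() looks like this (a summary of the diff below,
with the unmap work elided):

	vma_lock = NULL;
	resv_map = NULL;
	locked = false;

	/* decide the lock type once and remember the decision */
	if (__vma_shareable_lock(vma)) {
		vma_lock = vma->vm_private_data;
		if (vma_lock && down_write_trylock(&vma_lock->rw_sema))
			locked = true;
	} else if (__vma_private_lock(vma)) {
		resv_map = (struct resv_map *)((unsigned long)vma->vm_private_data & ~HPAGE_RESV_MASK);
		if (resv_map && down_write_trylock(&resv_map->rw_sema))
			locked = true;
	} else {
		locked = true;	/* no lock needed for this VMA */
	}

	/* ... unmap work, skipped if !locked ... */

	/* unlock strictly through the saved pointers, never by re-testing the VMA */
	if (vma_lock)
		up_write(&vma_lock->rw_sema);
	else if (resv_map)
		up_write(&resv_map->rw_sema);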

Reported-by: syzbot+62edf7e27b2e8f754525@...kaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=62edf7e27b2e8f754525
Signed-off-by: Deepanshu Kartikey <kartikey406@...il.com>
---
 fs/hugetlbfs/inode.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9e0625167517..2721ba2ee3f3 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -42,6 +42,10 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/hugetlbfs.h>
 
+#define HPAGE_RESV_OWNER    (1UL << 0)
+#define HPAGE_RESV_UNMAPPED (1UL << 1)
+#define HPAGE_RESV_MASK (HPAGE_RESV_OWNER | HPAGE_RESV_UNMAPPED)
+
 static const struct address_space_operations hugetlbfs_aops;
 static const struct file_operations hugetlbfs_file_operations;
 static const struct inode_operations hugetlbfs_dir_inode_operations;
@@ -475,6 +479,9 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		      zap_flags_t zap_flags)
 {
 	struct vm_area_struct *vma;
+	struct hugetlb_vma_lock *vma_lock;
+	struct resv_map *resv_map;
+	bool locked;
 
 	/*
 	 * end == 0 indicates that the entire range after start should be
@@ -484,8 +491,24 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 	vma_interval_tree_foreach(vma, root, start, end ? end - 1 : ULONG_MAX) {
 		unsigned long v_start;
 		unsigned long v_end;
-
-		if (!hugetlb_vma_trylock_write(vma))
+		vma_lock = NULL;
+		resv_map = NULL;
+		locked = false;
+
+		if (__vma_shareable_lock(vma)) {
+			vma_lock = vma->vm_private_data;
+			if (vma_lock && down_write_trylock(&vma_lock->rw_sema))
+				locked = true;
+		} else if (__vma_private_lock(vma)) {
+			resv_map = (struct resv_map *)((unsigned long)vma->vm_private_data & ~HPAGE_RESV_MASK);
+			if (resv_map && down_write_trylock(&resv_map->rw_sema))
+				locked = true;
+		} else {
+			/* No lock needed for this VMA */
+			locked = true;
+		}
+
+		if (!locked)
 			continue;
 
 		v_start = vma_offset_start(vma, start);
@@ -498,7 +521,10 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		 * vmas.  Therefore, lock is not held when calling
 		 * unmap_hugepage_range for private vmas.
 		 */
-		hugetlb_vma_unlock_write(vma);
+		if (vma_lock)
+			up_write(&vma_lock->rw_sema);
+		else if (resv_map)
+			up_write(&resv_map->rw_sema);
 	}
 }
 
-- 
2.43.0

