Open Source and information security mailing list archives
Message-ID: <20250506102402.88141-1-aha310510@gmail.com>
Date: Tue,  6 May 2025 19:24:02 +0900
From: Jeongjun Park <aha310510@...il.com>
To: dennis@...nel.org,
	tj@...nel.org,
	cl@...ux.com,
	akpm@...ux-foundation.org
Cc: jack@...e.cz,
	hughd@...gle.com,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Jeongjun Park <aha310510@...il.com>
Subject: [PATCH] lib/percpu_counter: fix data race in __percpu_counter_limited_add()

The following data-race was found in __percpu_counter_limited_add():

==================================================================
BUG: KCSAN: data-race in __percpu_counter_limited_add / __percpu_counter_limited_add

write to 0xffff88801f417e50 of 8 bytes by task 6663 on cpu 0:
 __percpu_counter_limited_add+0x388/0x4a0 lib/percpu_counter.c:386
 percpu_counter_limited_add include/linux/percpu_counter.h:77 [inline]
 shmem_inode_acct_blocks+0x10e/0x230 mm/shmem.c:233
 shmem_alloc_and_add_folio mm/shmem.c:1923 [inline]
 shmem_get_folio_gfp.constprop.0+0x87f/0xc90 mm/shmem.c:2533
 shmem_get_folio mm/shmem.c:2639 [inline]
 ....

read to 0xffff88801f417e50 of 8 bytes by task 6659 on cpu 1:
 __percpu_counter_limited_add+0xc8/0x4a0 lib/percpu_counter.c:344
 percpu_counter_limited_add include/linux/percpu_counter.h:77 [inline]
 shmem_inode_acct_blocks+0x10e/0x230 mm/shmem.c:233
 shmem_alloc_and_add_folio mm/shmem.c:1923 [inline]
 shmem_get_folio_gfp.constprop.0+0x87f/0xc90 mm/shmem.c:2533
 shmem_get_folio mm/shmem.c:2639 [inline]
 ....

value changed: 0x000000000000396d -> 0x000000000000398e
==================================================================

__percpu_counter_limited_add() is expected to protect fbc with
raw_spin_lock(), but it takes the lock too late: the fast path reads
fbc->count and updates the per-cpu counter before the lock is acquired,
racing with concurrent callers on other CPUs. Fix this by taking
raw_spin_lock() before the fast-path check, so the read of fbc->count
and the per-cpu update happen under the lock.

Fixes: beb986862844 ("shmem,percpu_counter: add _limited_add(fbc, limit, amount)")
Signed-off-by: Jeongjun Park <aha310510@...il.com>
---
 lib/percpu_counter.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 2891f94a11c6..17f9fc12b409 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -336,6 +336,7 @@ bool __percpu_counter_limited_add(struct percpu_counter *fbc,
 		return true;
 
 	local_irq_save(flags);
+	raw_spin_lock(&fbc->lock);
 	unknown = batch * num_online_cpus();
 	count = __this_cpu_read(*fbc->counters);
 
@@ -344,11 +345,10 @@ bool __percpu_counter_limited_add(struct percpu_counter *fbc,
 	    ((amount > 0 && fbc->count + unknown <= limit) ||
 	     (amount < 0 && fbc->count - unknown >= limit))) {
 		this_cpu_add(*fbc->counters, amount);
-		local_irq_restore(flags);
-		return true;
+		good = true;
+		goto out;
 	}
 
-	raw_spin_lock(&fbc->lock);
 	count = fbc->count + amount;
 
 	/* Skip percpu_counter_sum() when safe */
--
