Message-ID: <9be2ca36-0932-4237-aa0b-dd30161afe90@h-partners.com>
Date: Mon, 22 Dec 2025 15:42:34 +0300
From: Gladyshev Ilya <gladyshev.ilya1@...artners.com>
To: Gregory Price <gourry@...rry.net>
CC: <patchwork@...wei.com>, <guohanjun@...wei.com>,
	<wangkefeng.wang@...wei.com>, <weiyongjun1@...wei.com>,
	<yusongping@...wei.com>, <leijitang@...wei.com>, <artem.kuzin@...wei.com>,
	<stepanov.anatoly@...wei.com>, <alexander.grubnikov@...wei.com>,
	<gorbunov.ivan@...artners.com>, <akpm@...ux-foundation.org>,
	<david@...nel.org>, <lorenzo.stoakes@...cle.com>, <Liam.Howlett@...cle.com>,
	<vbabka@...e.cz>, <rppt@...nel.org>, <surenb@...gle.com>, <mhocko@...e.com>,
	<ziy@...dia.com>, <harry.yoo@...cle.com>, <willy@...radead.org>,
	<yuzhao@...gle.com>, <baolin.wang@...ux.alibaba.com>,
	<muchun.song@...ux.dev>, <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 2/2] mm: implement page refcount locking via dedicated
 bit

On 12/19/2025 9:17 PM, Gregory Price wrote:
> On Fri, Dec 19, 2025 at 12:46:39PM +0000, Gladyshev Ilya wrote:
>> The current atomic-based page refcount implementation treats zero
>> counter as dead and requires a compare-and-swap loop in folio_try_get()
>> to prevent incrementing a dead refcount. This CAS loop acts as a
>> serialization point and can become a significant bottleneck during
>> high-frequency file read operations.
>>
>> This patch introduces FOLIO_LOCKED_BIT to distinguish between a
>> (temporary) zero refcount and a locked (dead/frozen) state. Because
>> incrementing the counter no longer affects its locked/unlocked state, it is
>> possible to use an optimistic atomic_fetch_add() in
>> page_ref_add_unless_zero() that operates independently of the locked bit.
>> The locked state is handled after the increment attempt, eliminating the
>> need for the CAS loop.
>>
> 
> Such a fundamental change needs additional validation to show there's no
> obvious failures.  Have you run this through a model checker to verify
> the only failure condition is the 2^31 overflow condition you describe?
Aside from extensive logical reasoning, I validated several racy scenarios 
via tools/memory-model model checking:
1. Increment vs. free race (bad output: use-after-free | memory leak)
2. Free vs. free race (bad output: double free | memory leak)
3. Increment vs. freeze (bad output: both succeed or both fail)
4. Increment vs. unfreeze (bad output: missed increment)

If there are other scenarios you are concerned about, I will model them 
as well. You can find the litmus tests at the end of this email.
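
For context, the kernel-side operations these tests model correspond 
roughly to the sketch below (sketch_try_get(), sketch_put_and_test() and 
FOLIO_LOCKED_MASK are illustrative names only, not the patch code; cleanup 
after a failed speculative increment is omitted, just as in the litmus 
tests):

#include <linux/atomic.h>
#include <linux/types.h>

#define FOLIO_LOCKED_MASK	(1 << 30)	/* plays the role of "32" in the tests */

/* Optimistic get: increment first, then check the locked bit (the P1 side). */
static inline bool sketch_try_get(atomic_t *refcount)
{
	return !(atomic_add_return(1, refcount) & FOLIO_LOCKED_MASK);
}

/* Put: whoever reaches zero races to set the locked bit and frees (the P0 side). */
static inline bool sketch_put_and_test(atomic_t *refcount)
{
	return atomic_dec_and_test(refcount) &&
	       atomic_cmpxchg_relaxed(refcount, 0, FOLIO_LOCKED_MASK) == 0;
}
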
> A single benchmark and a short changelog leaves me very uneasy about
> such a change.
This RFC submission was primarily focused on demonstrating the concept 
and the performance gain for the reported bottleneck. I will improve the 
changelog (and safety reasoning) for later submissions, as well as the 
benchmarking side.

---

Note: I used 32 as the locked bit value in the model tests for better 
readability; this doesn't affect the results.
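
The tests can be run with herd7 from the tools/memory-model directory, e.g.:

  $ herd7 -conf linux-kernel.cfg litmus-tests/folio_refcount/free_free_race.litmus

(klitmus7 can be used instead to execute the same tests on real hardware.)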

---

diff --git a/tools/memory-model/litmus-tests/folio_refcount/free_free_race.litmus b/tools/memory-model/litmus-tests/folio_refcount/free_free_race.litmus
new file mode 100644
index 000000000000..4dc7e899245b
--- /dev/null
+++ b/tools/memory-model/litmus-tests/folio_refcount/free_free_race.litmus
@@ -0,0 +1,37 @@
+C free_vs_free_race
+
+(* Result: Never
+ *
+ * Both P0 and P1 try to decrement the refcount.
+ *
+ * Expected result: exactly one deallocation (r0 xor r1 == 1),
+ * i.e. r0 != r1, so the bad result is r0 == r1
+*)
+
+{
+       int refcount = 2;
+}
+
+P0(int *refcount)
+{
+       int r0;
+
+       r0 = atomic_dec_and_test(refcount);
+       if (r0) {
+               r0 = atomic_cmpxchg_relaxed(refcount, 0, 32) == 0;
+       }
+}
+
+
+P1(int *refcount)
+{
+       int r1;
+
+       r1 = atomic_dec_and_test(refcount);
+       if (r1) {
+               r1 = atomic_cmpxchg_relaxed(refcount, 0, 32) == 0;
+       }
+}
+
+exists (0:r0 == 1:r1)
+
diff --git a/tools/memory-model/litmus-tests/folio_refcount/inc_free_race.litmus b/tools/memory-model/litmus-tests/folio_refcount/inc_free_race.litmus
new file mode 100644
index 000000000000..863abba48415
--- /dev/null
+++ b/tools/memory-model/litmus-tests/folio_refcount/inc_free_race.litmus
@@ -0,0 +1,34 @@
+C inc_free_race
+
+(* Result: Never
+ *
+ * P0 drops the last reference and tries to free the object.
+ * P1 tries to acquire a reference.
+ * Expected result: exactly one of them succeeds (r0 xor r1 == 1),
+ *   so the bad result is r0 == r1
+*)
+
+{
+       int refcount = 1;
+}
+
+P0(int *refcount)
+{
+       int r0;
+
+       r0 = atomic_dec_and_test(refcount);
+       if (r0) {
+               r0 = atomic_cmpxchg_relaxed(refcount, 0, 32) == 0;
+       }
+}
+
+
+P1(int *refcount)
+{
+       int r1;
+
+       r1 = atomic_add_return(1, refcount);
+       r1 = (r1 & (32)) == 0;
+}
+
+exists (0:r0 == 1:r1)
diff --git a/tools/memory-model/litmus-tests/folio_refcount/inc_freeze_race.litmus b/tools/memory-model/litmus-tests/folio_refcount/inc_freeze_race.litmus
new file mode 100644
index 000000000000..6e3a4112080c
--- /dev/null
+++ b/tools/memory-model/litmus-tests/folio_refcount/inc_freeze_race.litmus
@@ -0,0 +1,31 @@
+C inc_freeze_race
+
+(* Result: Never
+ *
+ * P0 tries to freeze the counter at value 3 (the value is arbitrary).
+ * P1 tries to acquire a reference.
+ * Expected result: exactly one of them succeeds (r0 xor r1 == 1),
+ *   so the bad result is r0 == r1 (both 0 or both 1).
+*)
+
+{
+       int refcount = 3;
+}
+
+P0(int *refcount)
+{
+       int r0;
+
+       r0 = atomic_cmpxchg(refcount, 3, 32) == 3;
+}
+
+
+P1(int *refcount)
+{
+       int r0;
+
+       r0 = atomic_add_return(1, refcount);
+       r0 = (r0 & (32)) == 0;
+}
+
+exists (0:r0 == 1:r0)
diff --git a/tools/memory-model/litmus-tests/folio_refcount/inc_unfreeze_race.litmus b/tools/memory-model/litmus-tests/folio_refcount/inc_unfreeze_race.litmus
new file mode 100644
index 000000000000..f7e2273fe7da
--- /dev/null
+++ b/tools/memory-model/litmus-tests/folio_refcount/inc_unfreeze_race.litmus
@@ -0,0 +1,30 @@
+C inc_unfreeze_race
+
+(* Result: Never
+ *
+ * P0 unfreezes the refcount, restoring the saved value 3.
+ * P1 tries to acquire a reference.
+ *
+ * Expected result: either P1 fails or the final refcount is 4.
+ * Bad result: a missed increment
+*)
+
+{
+       int refcount = 32;
+}
+
+P0(int *refcount)
+{
+       smp_store_release(refcount, 3);
+}
+
+
+P1(int *refcount)
+{
+       int r0;
+
+       r0 = atomic_add_return(1, refcount);
+       r0 = (r0 & (32)) == 0;
+}
+
+exists (1:r0=1 /\ refcount != 4)

