Message-ID: <Z4qBnFICnuhMuH2t@casper.infradead.org>
Date: Fri, 17 Jan 2025 16:13:16 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>, will@...nel.org,
boqun.feng@...il.com, mark.rutland@....com,
Mateusz Guzik <mjguzik@...il.com>, akpm@...ux-foundation.org,
liam.howlett@...cle.com, lorenzo.stoakes@...cle.com,
david.laight.linux@...il.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, oliver.sang@...el.com,
mgorman@...hsingularity.net, david@...hat.com, peterx@...hat.com,
oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org,
brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com,
hughd@...gle.com, lokeshgidra@...gle.com, minchan@...gle.com,
jannh@...gle.com, shakeel.butt@...ux.dev, souravpanda@...gle.com,
pasha.tatashin@...een.com, klarasmodin@...il.com,
richard.weiyang@...il.com, corbet@....net,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH] refcount: Strengthen inc_not_zero()
On Wed, Jan 15, 2025 at 05:00:11PM +0100, Peter Zijlstra wrote:
> Subject: refcount: Strengthen inc_not_zero()
>
> For speculative lookups where a successful inc_not_zero() pins the
> object, but where we still need to double check that the object acquired
> is indeed the one we set out to acquire, this validation needs to happen
> *after* the increment.
>
> Notably, SLAB_TYPESAFE_BY_RCU is one such example.
While you're looking at inc_not_zero(), have you already thought about
doing something like this? i.e. failing rather than saturating, since
all users of this already have to check for failure. It looks like
two comparisons per call rather than three.
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 35f039ecb272..3ef7d316e870 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -142,16 +142,13 @@ bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
 {
 	int old = refcount_read(r);
 
 	do {
-		if (!old)
+		if (old <= 0 || old + i < 0)
 			break;
 	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
 
 	if (oldp)
 		*oldp = old;
 
-	if (unlikely(old < 0 || old + i < 0))
-		refcount_warn_saturate(r, REFCOUNT_ADD_NOT_ZERO_OVF);
-
 	return old;
 }
$ ./scripts/bloat-o-meter before.o after.o
add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-91 (-91)
Function old new delta
io_wq_for_each_worker.isra 162 158 -4
io_worker_handle_work 1403 1387 -16
io_wq_activate_free_worker 187 158 -29
io_queue_worker_create 367 325 -42
Total: Before=10068, After=9977, chg -0.90%
(that's io_uring/io-wq.o as an example user of refcount_inc_not_zero())