Message-ID: <20250111063152.1638-1-hdanton@sina.com>
Date: Sat, 11 Jan 2025 14:31:49 +0800
From: Hillf Danton <hdanton@...a.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org,
peterz@...radead.org,
willy@...radead.org,
hannes@...xchg.org,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v9 10/17] refcount: introduce __refcount_{add|inc}_not_zero_limited
On Fri, 10 Jan 2025 20:25:57 -0800 Suren Baghdasaryan <surenb@...gle.com> wrote:
> -bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
> +bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
> + int limit)
> {
> int old = refcount_read(r);
>
> do {
> if (!old)
> break;
> +
> + if (statically_true(limit == INT_MAX))
> + continue;
> +
> + if (i > limit - old) {
> + if (oldp)
> + *oldp = old;
> + return false;
> + }
> } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
The acquire version should be used here; see atomic_long_try_cmpxchg_acquire()
in kernel/locking/rwsem.c.
And why not use atomic_long_t directly instead of adding this limited version?