Message-ID: <20250203150525.456525-3-bigeasy@linutronix.de>
Date: Mon, 3 Feb 2025 16:05:23 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: linux-kernel@...r.kernel.org,
rcu@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
MengEn Sun <mengensun@...cent.com>,
Thomas Gleixner <tglx@...utronix.de>,
YueHong Wu <yuehongwu@...cent.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Uladzislau Rezki <urezki@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang1211@...il.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: [PATCH 2/4] ucount: Replace get_ucounts_or_wrap() with atomic_inc_not_zero().
get_ucounts_or_wrap() increments the counter and, if the result is
negative, decrements it again to undo the previous increment. This can
be replaced with atomic_inc_not_zero(), which increments the counter
only if it is not already zero.

This simplifies get_ucounts() because the put on the failure path can
be removed. atomic_inc_not_zero() is implemented as a cmpxchg() loop,
which may be retried several times if another get/put is performed in
parallel. This will be optimized later.

Increment the reference counter only if it has not yet dropped to zero.
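
For illustration, the cmpxchg() loop mentioned above boils down to the
following pattern (a simplified sketch, not the arch-specific kernel
fallback; the helper name inc_not_zero_sketch() is made up for this
example):

	static bool inc_not_zero_sketch(atomic_t *v)
	{
		int old = atomic_read(v);

		do {
			/* Never take a reference once the count dropped to zero. */
			if (!old)
				return false;
		} while (!atomic_try_cmpxchg(v, &old, old + 1));

		return true;
	}

On contention the loop simply retries with the freshly observed value,
which is the retry behaviour referred to above.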
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
kernel/ucount.c | 24 ++++++------------------
1 file changed, 6 insertions(+), 18 deletions(-)
diff --git a/kernel/ucount.c b/kernel/ucount.c
index 86c5f1c0bad90..4aa5011538257 100644
--- a/kernel/ucount.c
+++ b/kernel/ucount.c
@@ -146,25 +146,16 @@ static void hlist_add_ucounts(struct ucounts *ucounts)
 	spin_unlock_irq(&ucounts_lock);
 }
 
-static inline bool get_ucounts_or_wrap(struct ucounts *ucounts)
-{
-	/* Returns true on a successful get, false if the count wraps. */
-	return !atomic_add_negative(1, &ucounts->count);
-}
-
 struct ucounts *get_ucounts(struct ucounts *ucounts)
 {
-	if (!get_ucounts_or_wrap(ucounts)) {
-		put_ucounts(ucounts);
-		ucounts = NULL;
-	}
-	return ucounts;
+	if (atomic_inc_not_zero(&ucounts->count))
+		return ucounts;
+	return NULL;
 }
 
 struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
 {
 	struct hlist_head *hashent = ucounts_hashentry(ns, uid);
-	bool wrapped;
 	struct ucounts *ucounts, *new = NULL;
 
 	spin_lock_irq(&ucounts_lock);
@@ -189,14 +180,11 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
 			return new;
 		}
 	}
-
-	wrapped = !get_ucounts_or_wrap(ucounts);
+	if (!atomic_inc_not_zero(&ucounts->count))
+		ucounts = NULL;
 	spin_unlock_irq(&ucounts_lock);
 	kfree(new);
-	if (wrapped) {
-		put_ucounts(ucounts);
-		return NULL;
-	}
+
 	return ucounts;
 }
 
--
2.47.2