Message-Id: <20221019140850.3395-1-ubizjak@gmail.com>
Date:   Wed, 19 Oct 2022 16:08:50 +0200
From:   Uros Bizjak <ubizjak@...il.com>
To:     linux-kernel@...r.kernel.org
Cc:     Uros Bizjak <ubizjak@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Josh Poimboeuf <jpoimboe@...nel.org>,
        Jason Baron <jbaron@...mai.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ard Biesheuvel <ardb@...nel.org>
Subject: [PATCH] jump_label: use atomic_try_cmpxchg in static_key_slow_inc_cpuslocked

Use atomic_try_cmpxchg() instead of the atomic_cmpxchg(*ptr, old, new)
== old pattern in static_key_slow_inc_cpuslocked().  The x86 CMPXCHG
instruction returns success in the ZF flag, so this change saves a
compare after the cmpxchg (and the related move instruction in front
of the cmpxchg).
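
For illustration only (not part of the patch): a minimal user-space
model of the two primitives, built on C11 <stdatomic.h>.  The helper
names cmpxchg_model() and try_cmpxchg_model() are made up for this
sketch.  The kernel's generic atomic_try_cmpxchg() fallback has
roughly this shape; the x86 arch version instead takes the bool
result straight from CMPXCHG's ZF, so no separate compare is emitted.

#include <stdatomic.h>
#include <stdbool.h>

/* Model of atomic_cmpxchg(): returns the value observed at *v. */
static int cmpxchg_model(atomic_int *v, int old, int new)
{
	atomic_compare_exchange_strong(v, &old, new);
	return old;
}

/*
 * Model of atomic_try_cmpxchg(): returns success as a bool and, on
 * failure, writes the observed value back through "old".
 */
static bool try_cmpxchg_model(atomic_int *v, int *old, int new)
{
	int r, o = *old;

	r = cmpxchg_model(v, o, new);
	if (r != o)
		*old = r;
	return r == o;
}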

Also, atomic_try_cmpxchg() implicitly assigns the old value of *ptr to
"old" when the cmpxchg fails, which enables further code simplification.
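
Again for illustration only, reusing the model helpers above: the two
loop shapes from the hunk below, with made-up function names.  A
failed try_cmpxchg has already refreshed "v", so both the loop's
update expression and the manual compare disappear.

/* Before: keep the returned value and compare it against "v" by hand. */
static bool inc_if_positive_before(atomic_int *enabled)
{
	int v, v1;

	for (v = atomic_load(enabled); v > 0; v = v1) {
		v1 = cmpxchg_model(enabled, v, v + 1);
		if (v1 == v)
			return true;
	}
	return false;
}

/* After: the failure path already updated "v", so just loop again. */
static bool inc_if_positive_after(atomic_int *enabled)
{
	for (int v = atomic_load(enabled); v > 0; )
		if (try_cmpxchg_model(enabled, &v, v + 1))
			return true;
	return false;
}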

No functional change intended.

Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Josh Poimboeuf <jpoimboe@...nel.org>
Cc: Jason Baron <jbaron@...mai.com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Ard Biesheuvel <ardb@...nel.org>
Signed-off-by: Uros Bizjak <ubizjak@...il.com>
---
 kernel/jump_label.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 714ac4c3b556..4d6c6f5f60db 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -115,8 +115,6 @@ EXPORT_SYMBOL_GPL(static_key_count);
 
 void static_key_slow_inc_cpuslocked(struct static_key *key)
 {
-	int v, v1;
-
 	STATIC_KEY_CHECK_USE(key);
 	lockdep_assert_cpus_held();
 
@@ -132,11 +130,9 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
 	 * so it counts as "enabled" in jump_label_update().  Note that
 	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
 	 */
-	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
-		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
-		if (likely(v1 == v))
+	for (int v = atomic_read(&key->enabled); v > 0; )
+		if (likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)))
 			return;
-	}
 
 	jump_label_lock();
 	if (atomic_read(&key->enabled) == 0) {
-- 
2.37.3
