lists.openwall.net — Open Source and information security mailing list archives
Message-ID: <172563367463.2215.5542972042769938731.tip-bot2@tip-bot2>
Date: Fri, 06 Sep 2024 14:41:14 -0000
From: "tip-bot2 for Peter Zijlstra" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Darrick J. Wong" <djwong@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Peter Zijlstra (Intel)" <peterz@...radead.org>,
	x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: locking/urgent] jump_label: Fix static_key_slow_dec() yet again

The following commit has been merged into the locking/urgent branch of tip:

Commit-ID:     de752774f38bb766941ed1bf910ba5a9f6cc6bf7
Gitweb:        https://git.kernel.org/tip/de752774f38bb766941ed1bf910ba5a9f6cc6bf7
Author:        Peter Zijlstra <peterz@...radead.org>
AuthorDate:    Wed, 07 Aug 2024 16:03:12 +02:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Fri, 06 Sep 2024 16:29:22 +02:00

jump_label: Fix static_key_slow_dec() yet again

While commit 83ab38ef0a0b ("jump_label: Fix concurrency issues in
static_key_slow_dec()") fixed one problem, it created yet another;
notably, the following is now possible:

  slow_dec
    if (try_dec) // dec_not_one-ish, false
    // enabled == 1

                                slow_inc
                                  if (inc_not_disabled) // inc_not_zero-ish
                                  // enabled == 2
                                  return

    guard(mutex)(&jump_label_mutex);
    if (atomic_cmpxchg(1,0)==1) // false, we're 2

                                slow_dec
                                  if (try_dec) // dec_not_one, true
                                  // enabled == 1
                                  return

    else
      try_dec() // dec_not_one, false
      WARN

Close this by creating two distinct operations: one dec_not_one()-like
for the fast path and one dec_and_test()-like for the slow path, both
also taking the magic -1 value into account.

Thomas provided the more readable version with comments.

Fixes: 83ab38ef0a0b ("jump_label: Fix concurrency issues in static_key_slow_dec()")
Reported-by: "Darrick J. Wong" <djwong@...nel.org>
Co-developed-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/875xsc4ehr.ffs@tglx
---
 kernel/jump_label.c | 83 ++++++++++++++++++++++++++++----------------
 1 file changed, 54 insertions(+), 29 deletions(-)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 6dc76b5..0881fd2 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -168,8 +168,8 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key)
 		jump_label_update(key);
 		/*
 		 * Ensure that when static_key_fast_inc_not_disabled() or
-		 * static_key_slow_try_dec() observe the positive value,
-		 * they must also observe all the text changes.
+		 * static_key_dec() observe the positive value, they must also
+		 * observe all the text changes.
 		 */
 		atomic_set_release(&key->enabled, 1);
 	} else {
@@ -250,49 +250,74 @@ void static_key_disable(struct static_key *key)
 }
 EXPORT_SYMBOL_GPL(static_key_disable);
 
-static bool static_key_slow_try_dec(struct static_key *key)
+static bool static_key_dec(struct static_key *key, bool dec_not_one)
 {
-	int v;
+	int v = atomic_read(&key->enabled);
 
-	/*
-	 * Go into the slow path if key::enabled is less than or equal than
-	 * one. One is valid to shut down the key, anything less than one
-	 * is an imbalance, which is handled at the call site.
-	 *
-	 * That includes the special case of '-1' which is set in
-	 * static_key_slow_inc_cpuslocked(), but that's harmless as it is
-	 * fully serialized in the slow path below. By the time this task
-	 * acquires the jump label lock the value is back to one and the
-	 * retry under the lock must succeed.
-	 */
-	v = atomic_read(&key->enabled);
 	do {
 		/*
-		 * Warn about the '-1' case though; since that means a
-		 * decrement is concurrent with a first (0->1) increment. IOW
-		 * people are trying to disable something that wasn't yet fully
-		 * enabled. This suggests an ordering problem on the user side.
+		 * Warn about the '-1' case; since that means a decrement is
+		 * concurrent with a first (0->1) increment. IOW people are
+		 * trying to disable something that wasn't yet fully enabled.
+		 * This suggests an ordering problem on the user side.
+		 *
+		 * Warn about the '0' case; simple underflow.
 		 */
-		WARN_ON_ONCE(v < 0);
-		if (v <= 1)
-			return false;
+		if (WARN_ON_ONCE(v <= 0))
+			return v;
+
+		if (dec_not_one && v == 1)
+			return v;
 	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v - 1)));
 
-	return true;
+	return v;
+}
+
+/*
+ * Fastpath: Decrement if the reference count is greater than one
+ *
+ * Returns false, if the reference count is 1 or -1 to force the caller
+ * into the slowpath.
+ *
+ * The -1 case is to handle a decrement during a concurrent first enable,
+ * which sets the count to -1 in static_key_slow_inc_cpuslocked(). As the
+ * slow path is serialized the caller will observe 1 once it acquired the
+ * jump_label_mutex, so the slow path can succeed.
+ *
+ * Notably 0 (underflow) returns success such that it bails without doing
+ * anything.
+ */
+static bool static_key_dec_not_one(struct static_key *key)
+{
+	int v = static_key_dec(key, true);
+
+	return v != 1 && v != -1;
+}
+
+/*
+ * Slowpath: Decrement and test whether the refcount hit 0.
+ *
+ * Returns true if the refcount hit zero, i.e. the previous value was one.
+ */
+static bool static_key_dec_and_test(struct static_key *key)
+{
+	int v = static_key_dec(key, false);
+
+	lockdep_assert_held(&jump_label_mutex);
+	return v == 1;
+}
 
 static void __static_key_slow_dec_cpuslocked(struct static_key *key)
 {
 	lockdep_assert_cpus_held();
 
-	if (static_key_slow_try_dec(key))
+	if (static_key_dec_not_one(key))
 		return;
 
 	guard(mutex)(&jump_label_mutex);
-	if (atomic_cmpxchg(&key->enabled, 1, 0) == 1)
+	if (static_key_dec_and_test(key))
 		jump_label_update(key);
-	else
-		WARN_ON_ONCE(!static_key_slow_try_dec(key));
 }
 
 static void __static_key_slow_dec(struct static_key *key)
@@ -329,7 +354,7 @@ void __static_key_slow_dec_deferred(struct static_key *key,
 {
 	STATIC_KEY_CHECK_USE(key);
 
-	if (static_key_slow_try_dec(key))
+	if (static_key_dec_not_one(key))
 		return;
 
 	schedule_delayed_work(work, timeout);
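For readers without the kernel tree at hand, the fast/slow split above can be sketched with userspace C11 atomics. This is a hypothetical standalone model, not the kernel code: `struct key`, `key_dec()`, `key_dec_not_one()`, `key_dec_and_test()` and `key_slow_dec()` are made-up stand-ins mirroring the patch, with WARN_ON_ONCE/lockdep/guard() replaced by plain returns and a pthread mutex, and jump_label_update() reduced to a boolean result:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

/* Stand-in for struct static_key: just the refcount. */
struct key { atomic_int enabled; };

static pthread_mutex_t key_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * Common helper, mirroring static_key_dec() from the patch: returns the
 * value observed *before* any decrement. With dec_not_one set it refuses
 * to perform the final 1 -> 0 transition, leaving that to the slow path.
 */
static int key_dec(struct key *k, bool dec_not_one)
{
	int v = atomic_load(&k->enabled);

	do {
		if (v <= 0)                 /* underflow, or concurrent first enable (-1) */
			return v;
		if (dec_not_one && v == 1)  /* force the caller into the slow path */
			return v;
	} while (!atomic_compare_exchange_weak(&k->enabled, &v, v - 1));

	return v;
}

/* Fast path: succeeds only when no 1 -> 0 transition can be needed. */
static bool key_dec_not_one(struct key *k)
{
	int v = key_dec(k, true);

	return v != 1 && v != -1;
}

/* Slow path, under the mutex: did we perform the final 1 -> 0 transition? */
static bool key_dec_and_test(struct key *k)
{
	return key_dec(k, false) == 1;
}

/* Models __static_key_slow_dec_cpuslocked(); returns true if the key went off. */
static bool key_slow_dec(struct key *k)
{
	bool disabled = false;

	if (key_dec_not_one(k))
		return false;           /* count stayed above 1, nothing to patch */

	pthread_mutex_lock(&key_mutex);
	if (key_dec_and_test(k))
		disabled = true;        /* here the kernel would call jump_label_update() */
	pthread_mutex_unlock(&key_mutex);
	return disabled;
}
```

With `enabled == 2`, a first `key_slow_dec()` takes only the fast path and leaves 1; the second call falls through to the slow path, does 1 -> 0 under the mutex, and reports the disable. Note how an underflowed key (0) makes `key_dec_not_one()` return true, so the buggy caller bails without corrupting the count further, matching the comment in the patch.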