Date: Wed, 20 Mar 2019 17:14:51 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc: pbonzini@...hat.com, ard.biesheuvel@...aro.org, tglx@...utronix.de,
	mingo@...nel.org, linux-kernel@...r.kernel.org,
	oss-drivers@...ronome.com
Subject: Re: [PATCH] locking/static_key: Fix false positive warnings on concurrent dec/inc

On Tue, Mar 19, 2019 at 10:46:57AM -0700, Jakub Kicinski wrote:
> That indeed looks far cleanest, thanks!
>
> Tested-by: Jakub Kicinski <jakub.kicinski@...ronome.com>

Thanks, I've made it into the below patch.

---
Subject: locking/static_key: Fix false positive warnings on concurrent dec/inc
From: Peter Zijlstra <peterz@...radead.org>
Date: Tue, 19 Mar 2019 13:18:56 +0100

Even though the atomic_dec_and_mutex_lock() in
__static_key_slow_dec_cpuslocked() can never see a negative value in
key->enabled the subsequent sanity check is re-reading key->enabled, which
may have been set to -1 in the meantime by
static_key_slow_inc_cpuslocked().

	CPU A					CPU B

	__static_key_slow_dec_cpuslocked():	static_key_slow_inc_cpuslocked():
						# enabled = 1
	atomic_dec_and_mutex_lock()
						# enabled = 0
						atomic_read() == 0
						atomic_set(-1)
						# enabled = -1
	val = atomic_read()
	# Oops - val == -1!

The test case is TCP's clean_acked_data_enable() /
clean_acked_data_disable() as tickled by ktls (net/ktls).
Cc: mingo@...nel.org
Cc: oss-drivers@...ronome.com
Cc: ard.biesheuvel@...aro.org
Cc: tglx@...utronix.de
Cc: pbonzini@...hat.com
Reported-by: Jakub Kicinski <jakub.kicinski@...ronome.com>
Suggested-by: Jakub Kicinski <jakub.kicinski@...ronome.com>
Tested-by: Jakub Kicinski <jakub.kicinski@...ronome.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/jump_label.c |   21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -206,6 +206,8 @@ static void __static_key_slow_dec_cpuslo
					   unsigned long rate_limit,
					   struct delayed_work *work)
 {
+	int val;
+
	lockdep_assert_cpus_held();

	/*
@@ -215,17 +217,20 @@ static void __static_key_slow_dec_cpuslo
	 * returns is unbalanced, because all other static_key_slow_inc()
	 * instances block while the update is in progress.
	 */
-	if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
-		WARN(atomic_read(&key->enabled) < 0,
-		     "jump label: negative count!\n");
+	val = atomic_fetch_add_unless(&key->enabled, -1, 1);
+	if (val != 1) {
+		WARN(val < 0, "jump label: negative count!\n");
		return;
	}

-	if (rate_limit) {
-		atomic_inc(&key->enabled);
-		schedule_delayed_work(work, rate_limit);
-	} else {
-		jump_label_update(key);
+	jump_label_lock();
+	if (atomic_dec_and_test(&key->enabled)) {
+		if (rate_limit) {
+			atomic_inc(&key->enabled);
+			schedule_delayed_work(work, rate_limit);
+		} else {
+			jump_label_update(key);
+		}
	}
	jump_label_unlock();
 }