Message-ID: <Y29vhZ7dWtrlIMAz@hirez.programming.kicks-ass.net>
Date: Sat, 12 Nov 2022 11:03:49 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Dmitry Safonov <dima@...sta.com>
Cc: linux-kernel@...r.kernel.org, David Ahern <dsahern@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Bob Gilligan <gilligan@...sta.com>,
"David S. Miller" <davem@...emloft.net>,
Dmitry Safonov <0x7f454c46@...il.com>,
Francesco Ruggeri <fruggeri@...sta.com>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Salam Noureddine <noureddine@...sta.com>,
netdev@...r.kernel.org, Ard Biesheuvel <ardb@...nel.org>,
Jason Baron <jbaron@...mai.com>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v3 1/3] jump_label: Prevent key->enabled int overflow
On Fri, Nov 11, 2022 at 09:23:18PM +0000, Dmitry Safonov wrote:
> 1. With CONFIG_JUMP_LABEL=n static_key_slow_inc() doesn't have any
> protection against key->enabled refcounter overflow.
> 2. With CONFIG_JUMP_LABEL=y static_key_slow_inc_cpuslocked()
> still may turn the refcounter negative as (v + 1) may overflow.
>
> key->enabled is indeed a ref-counter, as documented in multiple
> places: the top comment in jump_label.h,
> Documentation/staging/static-keys.rst, etc.
>
> As -1 is reserved for a static key that's in the process of being
> enabled, functions would break with a negative key->enabled refcount:
> - for CONFIG_JUMP_LABEL=n, a negative return of static_key_count()
> breaks static_key_false() and static_key_true()
> - the ref counter may wrap back to 0 from the negative side after too
> many static_key_slow_inc() calls, leading to use-after-free issues.
>
> These flaws force some users to introduce an additional mutex and to
> prevent the reference counter from overflowing themselves; see
> bpf_enable_runtime_stats() checking the counter against INT_MAX / 2.
Urgh, nothing like working around defects instead of fixing them, I
suppose :/
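For reference, the workaround pattern in question looks roughly like
this (a sketch modeled on bpf_enable_runtime_stats(); the mutex,
counter and key names here are illustrative, not the actual bpf ones):

	static DEFINE_MUTEX(stats_enabled_mutex);
	static int stats_enabled_count;
	static DEFINE_STATIC_KEY_FALSE(stats_enabled_key);

	static int enable_stats(void)
	{
		mutex_lock(&stats_enabled_mutex);
		/* refuse long before key->enabled could overflow */
		if (stats_enabled_count > INT_MAX / 2) {
			mutex_unlock(&stats_enabled_mutex);
			return -EBUSY;
		}
		stats_enabled_count++;
		static_branch_inc(&stats_enabled_key);
		mutex_unlock(&stats_enabled_mutex);
		return 0;
	}

Every caller that wants overflow safety has to duplicate this dance;
that duplication is what the patch is meant to make unnecessary.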
> Prevent the reference counter overflow by checking if (v + 1) > 0.
> Change functions API to return whether the increment was successful.
>
> While at it, provide a static_key_fast_inc() helper that increments
> the ref counter atomically (without grabbing cpus_read_lock() on
> CONFIG_JUMP_LABEL=y). This is needed to add a new user for
-ENOTHERE, did you forget to Cc me on all patches?
> a static_key when the caller controls the lifetime of another user.
> The exact context where it will be used: if a listen socket with a
> TCP-MD5 key receives a SYN packet that passes verification and, as a
> result, creates a request socket - it's all done from RX softirq. At
> that moment userspace can't lock the listen socket and remove that
> TCP-MD5 key, so the tcp_md5_needed static branch can't get disabled.
> But the refcounter of the static key needs to be adjusted to account
> for a new user (the request socket).
Arguably all this should be a separate patch. Also I'm hoping the caller
does something like WARN on failure?
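Something like this at the eventual call site would make a failed
increment loud instead of silently dropping a reference (hypothetical
sketch; the exact member path to the underlying struct static_key of
tcp_md5_needed may differ):

	/* RX softirq: can't take the socket lock, must not fail silently */
	if (WARN_ON_ONCE(!static_key_fast_inc(&tcp_md5_needed.key.key)))
		goto drop_request;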
> -static inline void static_key_slow_inc(struct static_key *key)
> +static inline bool static_key_fast_inc(struct static_key *key)
> {
> + int v, v1;
> +
> STATIC_KEY_CHECK_USE(key);
> - atomic_inc(&key->enabled);
> + /*
> + * Prevent key->enabled getting negative to follow the same semantics
> + * as for CONFIG_JUMP_LABEL=y, see kernel/jump_label.c comment.
> + */
> + for (v = atomic_read(&key->enabled); v >= 0 && (v + 1) > 0; v = v1) {
> + v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
> + if (likely(v1 == v))
> + return true;
> + }
Please use atomic_try_cmpxchg(); it then turns into something like:
	int v = atomic_read(&key->enabled);

	do {
		if (v < 0 || (v + 1) < 0)
			return false;
	} while (!atomic_try_cmpxchg(&key->enabled, &v, v + 1));

	return true;
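(The advantage being that atomic_try_cmpxchg() updates v with the value
it actually observed on failure, so the loop needs no separate re-read
and compiles down to a plain cmpxchg loop on x86.)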
> + return false;
> }
> +#define static_key_slow_inc(key) static_key_fast_inc(key)
>
> static inline void static_key_slow_dec(struct static_key *key)
> {
> diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> index 714ac4c3b556..f2c1aa351d41 100644
> --- a/kernel/jump_label.c
> +++ b/kernel/jump_label.c
> @@ -113,11 +113,38 @@ int static_key_count(struct static_key *key)
> }
> EXPORT_SYMBOL_GPL(static_key_count);
>
> -void static_key_slow_inc_cpuslocked(struct static_key *key)
> +/***
> + * static_key_fast_inc - adds a user for a static key
> + * @key: static key that must be already enabled
> + *
> + * The caller must make sure that the static key can't get disabled while
> + * in this function. It doesn't patch jump labels, only adds a user to
> + * an already enabled static key.
> + *
> + * Returns true if the increment was done.
> + */
> +bool static_key_fast_inc(struct static_key *key)
Typically this primitive is called something_inc_not_zero().
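Compare the existing precedent for that naming convention; the generic
fallback is roughly (a paraphrase, not the literal kernel source):

	/* increment only if the counter is non-zero */
	static inline bool atomic_inc_not_zero(atomic_t *v)
	{
		return atomic_add_unless(v, 1, 0);
	}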
> {
> int v, v1;
>
> STATIC_KEY_CHECK_USE(key);
> + /*
> + * Negative key->enabled has a special meaning: it sends
> + * static_key_slow_inc() down the slow path, and it is non-zero
> + * so it counts as "enabled" in jump_label_update(). Note that
> + * atomic_inc_unless_negative() checks >= 0, so roll our own.
> + */
> + for (v = atomic_read(&key->enabled); v > 0 && (v + 1) > 0; v = v1) {
> + v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
> + if (likely(v1 == v))
> + return true;
> + }
Idem on atomic_try_cmpxchg().
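That is, something like the following, with the same semantics as the
quoted for-loop (only increment a strictly positive counter that will
not overflow):

	int v = atomic_read(&key->enabled);

	do {
		if (v <= 0 || (v + 1) < 0)
			return false;
	} while (!atomic_try_cmpxchg(&key->enabled, &v, v + 1));

	return true;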
> + return false;
> +}
> +EXPORT_SYMBOL_GPL(static_key_fast_inc);
> +
> +bool static_key_slow_inc_cpuslocked(struct static_key *key)
> +{
> lockdep_assert_cpus_held();
>
> /*
> @@ -126,17 +153,9 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
> * jump_label_update() process. At the same time, however,
> * the jump_label_update() call below wants to see
> * static_key_enabled(&key) for jumps to be updated properly.
> - *
> - * So give a special meaning to negative key->enabled: it sends
> - * static_key_slow_inc() down the slow path, and it is non-zero
> - * so it counts as "enabled" in jump_label_update(). Note that
> - * atomic_inc_unless_negative() checks >= 0, so roll our own.
> */
> - for (v = atomic_read(&key->enabled); v > 0; v = v1) {
> - v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
> - if (likely(v1 == v))
> - return;
> - }
This does not in fact apply, since someone already converted to try_cmpxchg.