Message-ID: <20191009092508.GH2311@hirez.programming.kicks-ass.net>
Date:   Wed, 9 Oct 2019 11:25:08 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Will Deacon <will@...nel.org>
Cc:     linux-kernel@...r.kernel.org, Kees Cook <keescook@...omium.org>,
        Ingo Molnar <mingo@...nel.org>,
        Elena Reshetova <elena.reshetova@...el.com>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Hanjun Guo <guohanjun@...wei.com>,
        Jan Glauber <jglauber@...vell.com>
Subject: Re: [PATCH v3 05/10] lib/refcount: Improve performance of generic
 REFCOUNT_FULL code

On Mon, Oct 07, 2019 at 04:46:58PM +0100, Will Deacon wrote:
> Rewrite the generic REFCOUNT_FULL implementation so that the saturation
> point is moved to INT_MIN / 2. This allows us to defer the sanity checks
> until after the atomic operation, which removes many uses of cmpxchg()
> in favour of atomic_fetch_{add,sub}().

It also radically changes behaviour, and afaict is subtly broken; see
below.

> Some crude perf results obtained from lkdtm show substantially less
> overhead, despite the checking:
> 
>  $ perf stat -r 3 -B -- echo {ATOMIC,REFCOUNT}_TIMING >/sys/kernel/debug/provoke-crash/DIRECT
> 
>  # arm64
>  ATOMIC_TIMING:                                      46.50451 +- 0.00134 seconds time elapsed  ( +-  0.00% )
>  REFCOUNT_TIMING (REFCOUNT_FULL, mainline):          77.57522 +- 0.00982 seconds time elapsed  ( +-  0.01% )
>  REFCOUNT_TIMING (REFCOUNT_FULL, this series):       48.7181 +- 0.0256 seconds time elapsed  ( +-  0.05% )
> 
>  # x86
>  ATOMIC_TIMING:                                      31.6225 +- 0.0776 seconds time elapsed  ( +-  0.25% )
>  REFCOUNT_TIMING (!REFCOUNT_FULL, mainline/x86 asm): 31.6689 +- 0.0901 seconds time elapsed  ( +-  0.28% )
>  REFCOUNT_TIMING (REFCOUNT_FULL, mainline):          53.203 +- 0.138 seconds time elapsed  ( +-  0.26% )
>  REFCOUNT_TIMING (REFCOUNT_FULL, this series):       31.7408 +- 0.0486 seconds time elapsed  ( +-  0.15% )

I would _really_ like some words in the changelog on how this is racy
and why it probably doesn't matter.

> Cc: Ingo Molnar <mingo@...nel.org>
> Cc: Elena Reshetova <elena.reshetova@...el.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Ard Biesheuvel <ard.biesheuvel@...aro.org>
> Tested-by: Hanjun Guo <guohanjun@...wei.com>
> Tested-by: Jan Glauber <jglauber@...vell.com>
> Reviewed-by: Kees Cook <keescook@...omium.org>
> Signed-off-by: Will Deacon <will@...nel.org>
> ---
>  include/linux/refcount.h | 87 ++++++++++++++++------------------------
>  1 file changed, 34 insertions(+), 53 deletions(-)
> 
> diff --git a/include/linux/refcount.h b/include/linux/refcount.h
> index e719b5b1220e..7f9aa6511142 100644
> --- a/include/linux/refcount.h
> +++ b/include/linux/refcount.h
> @@ -47,8 +47,8 @@ static inline unsigned int refcount_read(const refcount_t *r)
>  #ifdef CONFIG_REFCOUNT_FULL
>  #include <linux/bug.h>
>  
> +#define REFCOUNT_MAX		INT_MAX
> +#define REFCOUNT_SATURATED	(INT_MIN / 2)
>  
>  /*
>   * Variant of atomic_t specialized for reference counts.
> @@ -109,25 +109,19 @@ static inline unsigned int refcount_read(const refcount_t *r)
>   */
>  static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
>  {
> +	int old = refcount_read(r);
>  
>  	do {
> +		if (!old)
> +			break;
> +	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
>  
> +	if (unlikely(old < 0 || old + i < 0)) {

So this is obviously racy against itself and other operations.
Particularly refcount_read(), as the sole API member that actually
exposes the value, is affected.

Yes, it shouldn't happen and we'll have trouble if we ever hit this, but
are all refcount_read() users sane enough to not cause further trouble?
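
To make the window concrete, a minimal userspace sketch (plain C11
atomics, illustrative names only, not the kernel code) of the
fetch-then-fix-up sequence; any refcount_read() that lands between the
two steps sees the transient value rather than REFCOUNT_SATURATED:

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

#define REFCOUNT_SATURATED	(INT_MIN / 2)

int main(void)
{
	atomic_int refs;
	int i = 1;

	atomic_init(&refs, INT_MAX);	/* counter already at REFCOUNT_MAX */

	/* Step 1: the unconditional atomic op from the patch. */
	int old = atomic_fetch_add_explicit(&refs, i, memory_order_relaxed);

	/*
	 * The window: a reader on another CPU at this point sees the
	 * wrapped value, not REFCOUNT_SATURATED.
	 */
	printf("transient value a reader could see: %d\n",
	       atomic_load(&refs));

	/*
	 * Step 2: the deferred fix-up.  The patch writes the test as
	 * "old < 0 || old + i < 0", relying on the kernel's
	 * -fno-strict-overflow; spelled differently here so it stays
	 * well defined in plain C.
	 */
	if (old < 0 || old > INT_MAX - i)
		atomic_store(&refs, REFCOUNT_SATURATED);

	printf("value after the fix-up: %d\n", atomic_load(&refs));
	return 0;
}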

> +		refcount_set(r, REFCOUNT_SATURATED);
> +		WARN_ONCE(1, "refcount_t: saturated; leaking memory.\n");
> +	}
>  
> +	return old;
>  }
>  
>  /**
> @@ -148,7 +142,13 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
>   */
>  static inline void refcount_add(int i, refcount_t *r)
>  {
> +	int old = atomic_fetch_add_relaxed(i, &r->refs);
> +
> +	WARN_ONCE(!old, "refcount_t: addition on 0; use-after-free.\n");

This is a change in behaviour vs. the old code: on an addition on 0 the
previous version would not change the value, whereas this one will.

Is it important? I don't know, but it's not documented.
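
To spell the difference out, a quick userspace comparison (C11 atomics,
hypothetical helper names standing in for the old and new flavours;
saturation checks in the old path omitted for brevity):

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

#define REFCOUNT_SATURATED	(INT_MIN / 2)

/* Old REFCOUNT_FULL flavour: a cmpxchg loop that refuses to touch a 0 count. */
static int old_style_add(atomic_int *refs, int i)
{
	int val = atomic_load(refs);

	for (;;) {
		if (val == 0)
			return 0;	/* addition on 0: counter left untouched */
		if (atomic_compare_exchange_weak(refs, &val, val + i))
			return 1;
	}
}

/* New flavour from the patch: add first, complain and saturate afterwards. */
static void new_style_add(atomic_int *refs, int i)
{
	int old = atomic_fetch_add_explicit(refs, i, memory_order_relaxed);

	if (old == 0)
		fprintf(stderr, "addition on 0; use-after-free\n");
	/* the patch writes this as "old <= 0 || old + i <= 0" (-fno-strict-overflow) */
	if (old <= 0 || old > INT_MAX - i)
		atomic_store(refs, REFCOUNT_SATURATED);
}

int main(void)
{
	atomic_int a, b;

	atomic_init(&a, 0);
	atomic_init(&b, 0);

	old_style_add(&a, 1);
	new_style_add(&b, 1);

	printf("old style leaves a 0 count at %d, new style at %d\n",
	       atomic_load(&a), atomic_load(&b));
	return 0;
}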

> +	if (unlikely(old <= 0 || old + i <= 0)) {
> +		refcount_set(r, REFCOUNT_SATURATED);
> +		WARN_ONCE(old, "refcount_t: saturated; leaking memory.\n");
> +	}
>  }
>  
>  /**

> @@ -224,26 +208,19 @@ static inline void refcount_inc(refcount_t *r)
>   */
>  static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r)
>  {
> +	int old = atomic_fetch_sub_release(i, &r->refs);
>  
> +	if (old == i) {
>  		smp_acquire__after_ctrl_dep();
>  		return true;
>  	}
>  
> +	if (unlikely(old - i < 0)) {
> +		refcount_set(r, REFCOUNT_SATURATED);
> +		WARN_ONCE(1, "refcount_t: underflow; use-after-free.\n");
> +	}

I'm failing to see how this preserves REFCOUNT_SATURATED for
non-underflow. AFAICT this should have:

	if (unlikely(old == REFCOUNT_SATURATED || old - i < 0))
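
i.e. roughly this, with that condition folded into the quoted hunk
(untested sketch):

static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r)
{
	int old = atomic_fetch_sub_release(i, &r->refs);

	if (old == i) {
		smp_acquire__after_ctrl_dep();
		return true;
	}

	/* catch an already-saturated counter as well as genuine underflow */
	if (unlikely(old == REFCOUNT_SATURATED || old - i < 0)) {
		refcount_set(r, REFCOUNT_SATURATED);
		WARN_ONCE(1, "refcount_t: underflow; use-after-free.\n");
	}

	return false;
}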

> +	return false;
>  }
>  
>  /**
> @@ -276,9 +253,13 @@ static inline __must_check bool refcount_dec_and_test(refcount_t *r)
>   */
>  static inline void refcount_dec(refcount_t *r)
>  {
> +	int old = atomic_fetch_sub_release(1, &r->refs);
>  
> +	if (unlikely(old <= 1)) {

Idem.

> +		refcount_set(r, REFCOUNT_SATURATED);
> +		WARN_ONCE(1, "refcount_t: decrement hit 0; leaking memory.\n");
> +	}
> +}

Also, things like refcount_dec_not_one() might need fixing to preserve
REFCOUNT_SATURATED, because they're not expecting that value to actually
change, but with this patch it does!
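
For reference, the existing check being relied on looks something like
this (paraphrased from the current REFCOUNT_FULL code, not quoted
verbatim); it only recognises a saturated counter by its exact value:

static inline __must_check bool refcount_dec_not_one(refcount_t *r)
{
	unsigned int new, val = atomic_read(&r->refs);

	do {
		/* only an exact REFCOUNT_SATURATED value is treated as saturated */
		if (unlikely(val == REFCOUNT_SATURATED))
			return true;

		if (val == 1)
			return false;

		new = val - 1;
		if (new > val) {
			WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n");
			return true;
		}
	} while (!atomic_try_cmpxchg_release(&r->refs, &val, new));

	return true;
}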
