Date: Fri, 19 Apr 2024 14:17:44 +0100
From: Will Deacon <will@...nel.org>
To: Rohan McLure <rmclure@...ux.ibm.com>
Cc: linuxppc-dev@...ts.ozlabs.org, mpe@...erman.id.au,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	arnd@...db.de, gautam@...ux.ibm.com
Subject: Re: [PATCH] asm-generic/mmiowb: Mark accesses to fix KCSAN warnings

On Thu, Apr 04, 2024 at 03:38:53PM +1100, Rohan McLure wrote:
> Prior to this patch, data races of the following forms are detectable
> by KCSAN:
> 
> [1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
>     or otherwise outside of a critical section
> [2] Interrupted critical sections, where the interrupt will itself
>     acquire a lock
> 
> In case [1], the calling context does not need an mmiowb() call to be
> issued; if it did, it would issue one itself. Such calls to
> mmiowb_set_pending() are either idempotent or no-ops.
> 
> In case [2], irrespective of when the interrupt occurs, the interrupt
> acquires and releases its locks before returning, so nesting_count
> remains balanced. In the worst case, a critical section observes a
> pending mmiowb during its mmiowb_spin_unlock() call and is then
> interrupted, leading to an extraneous call to mmiowb(). This data race
> is clearly innocuous.
> 
> Resolve KCSAN warnings of type [1] by means of READ_ONCE() and
> WRITE_ONCE(). As increments and decrements of nesting_count are
> balanced across interrupt contexts, resolve type [2] warnings by simply
> suppressing instrumentation with data_race() rather than READ_ONCE()
> and WRITE_ONCE(); the memory consistency semantics of plain accesses
> still lead to correct behaviour.
> 
> Signed-off-by: Rohan McLure <rmclure@...ux.ibm.com>
> Reported-by: Michael Ellerman <mpe@...erman.id.au>
> Reported-by: Gautam Menghani <gautam@...ux.ibm.com>
> Tested-by: Gautam Menghani <gautam@...ux.ibm.com>
> Acked-by: Arnd Bergmann <arnd@...db.de>
> ---
> Previously discussed here:
> https://lore.kernel.org/linuxppc-dev/20230510033117.1395895-4-rmclure@linux.ibm.com/
> It was pushed back at the time because it affected other architectures.
> Reissuing to linuxppc-dev, as it does not enact a functional change.
> ---
>  include/asm-generic/mmiowb.h | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
> index 5698fca3bf56..f8c7c8a84e9e 100644
> --- a/include/asm-generic/mmiowb.h
> +++ b/include/asm-generic/mmiowb.h
> @@ -37,25 +37,28 @@ static inline void mmiowb_set_pending(void)
>  	struct mmiowb_state *ms = __mmiowb_state();
>  
>  	if (likely(ms->nesting_count))
> -		ms->mmiowb_pending = ms->nesting_count;
> +		WRITE_ONCE(ms->mmiowb_pending, ms->nesting_count);
>  }
>  
>  static inline void mmiowb_spin_lock(void)
>  {
>  	struct mmiowb_state *ms = __mmiowb_state();
> -	ms->nesting_count++;
> +
> +	/* Increment need not be atomic. Nestedness is balanced over interrupts. */
> +	data_race(ms->nesting_count++);
>  }
>  
>  static inline void mmiowb_spin_unlock(void)
>  {
>  	struct mmiowb_state *ms = __mmiowb_state();
> +	u16 pending = READ_ONCE(ms->mmiowb_pending);
>  
> -	if (unlikely(ms->mmiowb_pending)) {
> -		ms->mmiowb_pending = 0;
> +	WRITE_ONCE(ms->mmiowb_pending, 0);

Why are you changing this store to be unconditional?

Will
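
For reference, a minimal sketch of the conditional variant the question
points at: the store stays inside the unlikely() branch while still using
the marked accesses. The mmiowb() call and the nesting_count decrement are
assumed from the current upstream include/asm-generic/mmiowb.h, since the
quoted hunk is trimmed before that point; this is illustrative only, not
part of the posted patch.

static inline void mmiowb_spin_unlock(void)
{
	struct mmiowb_state *ms = __mmiowb_state();
	u16 pending = READ_ONCE(ms->mmiowb_pending);

	/* Clear the flag and issue mmiowb() only when a write was pending,
	 * preserving the original conditional-store behaviour. */
	if (unlikely(pending)) {
		WRITE_ONCE(ms->mmiowb_pending, 0);
		mmiowb();
	}

	/* Balanced against the data_race() increment in mmiowb_spin_lock(). */
	data_race(ms->nesting_count--);
}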
