Message-ID: <55A6F84F.5050502@hp.com>
Date:	Wed, 15 Jul 2015 20:18:23 -0400
From:	Waiman Long <waiman.long@...com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
	Douglas Hatch <doug.hatch@...com>,
	Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH v2 1/6] locking/pvqspinlock: Unconditional PV kick with
 _Q_SLOW_VAL

On 07/15/2015 05:10 AM, Peter Zijlstra wrote:
> On Tue, Jul 14, 2015 at 10:13:32PM -0400, Waiman Long wrote:
>> The smp_store_release() is not a full barrier. In order to avoid a missed
>> wakeup, we may need to add memory barriers around the locked and cpu state
>> variables, adding complexity. As the chance of a spurious wakeup is very
>> low, it is easier and safer to just do an unconditional kick at unlock
>> time.
> I have the below patch. We need that rmb in there anyhow for the hash to
> work.
>
> ---
> Subject: locking/pvqspinlock: Order pv_unhash after cmpxchg on unlock slowpath
> From: Will Deacon <will.deacon@....com>
> Date: Mon, 13 Jul 2015 16:58:30 +0100
>
> When we unlock in __pv_queued_spin_unlock, a failed cmpxchg on the lock
> value indicates that we need to take the slow-path and unhash the
> corresponding node blocked on the lock.
>
> Since a failed cmpxchg does not provide any memory-ordering guarantees,
> it is possible on weakly-ordered architectures for the node data to be
> read before the cmpxchg, returning a stale value and leading to hash
> corruption and/or a BUG().
>
> This patch adds an smp_rmb() following the failed cmpxchg operation, so
> that the unhashing is ordered after the lock has been checked.
>
> Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>
> Cc: Waiman Long <Waiman.Long@...com>
> Cc: Steve Capper <Steve.Capper@....com>
> Reported-by: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Will Deacon <will.deacon@....com>
> [peterz: More comments]
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Link: http://lkml.kernel.org/r/20150713155830.GL2632@arm.com
> ---
>   kernel/locking/qspinlock_paravirt.h |   23 ++++++++++++++++++-----
>   1 file changed, 18 insertions(+), 5 deletions(-)
>
> --- a/kernel/locking/qspinlock_paravirt.h
> +++ b/kernel/locking/qspinlock_paravirt.h
> @@ -244,13 +244,17 @@ static void pv_wait_head(struct qspinloc
>   		if (!lp) { /* ONCE */
>   			lp = pv_hash(lock, pn);
>   			/*
> -			 * lp must be set before setting _Q_SLOW_VAL
> +			 * We must hash before setting _Q_SLOW_VAL, such that
> +			 * when we observe _Q_SLOW_VAL in __pv_queued_spin_unlock()
> +			 * we'll be sure to be able to observe our hash entry.
>   			 *
> -			 * [S] lp = lock                [RmW] l = l->locked = 0
> -			 *     MB                             MB
> -			 * [S] l->locked = _Q_SLOW_VAL  [L]   lp
> +			 *   [S] pn->state
> +			 *   [S] <hash>                 [Rmw] l->locked == _Q_SLOW_VAL
> +			 *       MB                           RMB
> +			 * [RmW] l->locked = _Q_SLOW_VAL  [L] <unhash>
> +			 *                                [L] pn->state
>   			 *
> -			 * Matches the cmpxchg() in __pv_queued_spin_unlock().
> +			 * Matches the smp_rmb() in __pv_queued_spin_unlock().
>   			 */
>   			if (!cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL)) {
>   				/*
> @@ -306,6 +310,15 @@ __visible void __pv_queued_spin_unlock(s
>   	}
>
>   	/*
> +	 * A failed cmpxchg doesn't provide any memory-ordering guarantees,
> +	 * so we need a barrier to order the read of the node data in
> +	 * pv_unhash *after* we've read the lock being _Q_SLOW_VAL.
> +	 *
> +	 * Matches the cmpxchg() in pv_wait_head() setting _Q_SLOW_VAL.
> +	 */
> +	smp_rmb();
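
To make sure I understand the race being closed here, below is a minimal
user-space sketch of the same pattern using C11 atomics. The names and the
use of C11 fences are mine, for illustration only; this is not the kernel
code:

	#include <stdatomic.h>
	#include <stddef.h>

	#define LOCKED_VAL	1
	#define SLOW_VAL	3

	struct pv_node { int cpu; };

	static _Atomic int locked = LOCKED_VAL;
	static _Atomic(struct pv_node *) hash_entry; /* stand-in for the pv hash */

	/* Waiter side: publish the hash entry, then flag the slow path. */
	static void waiter(struct pv_node *pn)
	{
		int old = LOCKED_VAL;

		atomic_store_explicit(&hash_entry, pn, memory_order_relaxed);
		/*
		 * The kernel's successful cmpxchg() is a full barrier;
		 * seq_cst is the closest C11 analogue. Its release half
		 * orders the hash store above before SLOW_VAL becomes
		 * visible to the unlocker.
		 */
		atomic_compare_exchange_strong_explicit(&locked, &old,
				SLOW_VAL, memory_order_seq_cst,
				memory_order_seq_cst);
	}

	/* Unlock side: a failed cmpxchg means a waiter took the slow path. */
	static struct pv_node *unlocker(void)
	{
		int old = LOCKED_VAL;

		if (atomic_compare_exchange_strong_explicit(&locked, &old, 0,
				memory_order_relaxed, memory_order_relaxed))
			return NULL;	/* fast path: no waiter to kick */

		/*
		 * The failed cmpxchg only gave us a relaxed load of
		 * 'locked'. Without the acquire fence below, a weakly
		 * ordered CPU may read hash_entry before that load and
		 * see a stale (NULL) value, i.e. the corruption the
		 * changelog describes. The fence plays the role of the
		 * smp_rmb() added above.
		 */
		atomic_thread_fence(memory_order_acquire);
		return atomic_load_explicit(&hash_entry, memory_order_relaxed);
	}

Dropping that acquire fence makes the stale read permitted by the C11
memory model, which mirrors what the missing smp_rmb() would allow on a
weakly ordered machine.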

According to memory-barriers.txt, cmpxchg() is a full memory barrier. It
doesn't say that a failed cmpxchg loses that guarantee. So is the
documentation wrong, or is the weaker behavior specific to certain
architectures? I don't think it is true on x86.
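
I can see how the guarantee evaporates on a load-exclusive/store-exclusive
architecture, though. Roughly (load_exclusive() and store_exclusive() are
invented stand-ins for the LL/SC instructions, not real kernel helpers):

	static inline unsigned long
	llsc_cmpxchg(unsigned long *ptr, unsigned long old, unsigned long new)
	{
		unsigned long val;

		do {
			val = load_exclusive(ptr); /* LL: no implied barrier */
			if (val != old)
				return val;	/*
						 * Failure: we return before
						 * the store and before any
						 * barrier executes, so the
						 * caller gets no ordering
						 * guarantee at all.
						 */
		} while (store_exclusive(ptr, new)); /* SC: retry on failure */

		smp_mb();	/* full barrier, on the success path only */
		return val;
	}

On x86, by contrast, LOCK CMPXCHG executes and is fully ordered whether the
comparison succeeds or not, so the new smp_rmb() only matters on the weakly
ordered architectures.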

Cheers,
Longman

