Date:	Thu, 12 Jun 2014 10:17:31 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Waiman Long <Waiman.Long@...com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, linux-arch@...r.kernel.org,
	x86@...nel.org, linux-kernel@...r.kernel.org,
	virtualization@...ts.linux-foundation.org,
	xen-devel@...ts.xenproject.org, kvm@...r.kernel.org,
	Paolo Bonzini <paolo.bonzini@...il.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	Boris Ostrovsky <boris.ostrovsky@...cle.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Rik van Riel <riel@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
	David Vrabel <david.vrabel@...rix.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Gleb Natapov <gleb@...hat.com>,
	Scott J Norton <scott.norton@...com>,
	Chegu Vinod <chegu_vinod@...com>
Subject: Re: [PATCH v11 14/16] pvqspinlock: Add qspinlock para-virtualization
 support

On Fri, May 30, 2014 at 11:44:00AM -0400, Waiman Long wrote:
> @@ -19,13 +19,46 @@ extern struct static_key virt_unfairlocks_enabled;
>   * that the clearing of the lock bit is done ASAP without artificial delay
>   * due to compiler optimization.
>   */
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +static __always_inline void __queue_spin_unlock(struct qspinlock *lock)
> +#else
>  static inline void queue_spin_unlock(struct qspinlock *lock)
> +#endif
>  {
>  	barrier();
>  	ACCESS_ONCE(*(u8 *)lock) = 0;
>  	barrier();
>  }
>  
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +/*
> + * The lock byte can have a value of _Q_LOCKED_SLOWPATH to indicate
> + * that it needs to go through the slowpath to do the unlocking.
> + */
> +#define _Q_LOCKED_SLOWPATH	(_Q_LOCKED_VAL | 2)
> +
> +extern void queue_spin_unlock_slowpath(struct qspinlock *lock);
> +
> +static inline void queue_spin_unlock(struct qspinlock *lock)
> +{
> +	barrier();
> +	if (static_key_false(&paravirt_spinlocks_enabled)) {
> +		/*
> +		 * Need to atomically clear the lock byte to avoid racing with
> +		 * queue head waiter trying to set _Q_LOCKED_SLOWPATH.
> +		 */
> +		if (likely(cmpxchg((u8 *)lock, _Q_LOCKED_VAL, 0)
> +				== _Q_LOCKED_VAL))
> +			return;
> +		else
> +			queue_spin_unlock_slowpath(lock);
> +
> +	} else {
> +		__queue_spin_unlock(lock);
> +	}
> +	barrier();
> +}
> +#endif /* CONFIG_PARAVIRT_SPINLOCKS */
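
(Purely to illustrate the race that comment describes: a user-space C11
sketch, with the lock reduced to a single byte, the slowpath constant taken
from the patch above, _Q_LOCKED_VAL assumed to be 1, and the vCPU kick
stubbed out. The function names are made up and none of this is the kernel
code itself.)

#include <stdatomic.h>

#define Q_LOCKED_VAL		1			/* stand-in for _Q_LOCKED_VAL */
#define Q_LOCKED_SLOWPATH	(Q_LOCKED_VAL | 2)	/* as defined in the patch */

struct qspinlock { _Atomic unsigned char locked; };

/* Queue-head waiter about to halt: mark the lock byte so that whoever
 * unlocks next knows it must take the slowpath and kick us. */
static void waiter_mark_slowpath(struct qspinlock *lock)
{
	unsigned char old = Q_LOCKED_VAL;

	atomic_compare_exchange_strong(&lock->locked, &old, Q_LOCKED_SLOWPATH);
}

/* Unlocker: a plain store of 0 here could overwrite a concurrently set
 * Q_LOCKED_SLOWPATH, leaving the halted waiter unkicked forever; hence
 * the cmpxchg-based clear with a fallback to the slowpath. */
static void unlock(struct qspinlock *lock)
{
	unsigned char old = Q_LOCKED_VAL;

	if (!atomic_compare_exchange_strong(&lock->locked, &old, 0)) {
		/* old is now Q_LOCKED_SLOWPATH: clear the byte and kick the
		 * waiter (the kick itself is omitted in this sketch). */
		atomic_store_explicit(&lock->locked, 0, memory_order_release);
	}
}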

Ideally we'd make all this use alternatives or so, such that the actual
function remains short enough to actually inline:

static inline void queue_spin_unlock(struct qspinlock *lock)
{
	pv_spinlock_alternative(
		ACCESS_ONCE(*(u8 *)lock) = 0,
		pv_queue_spin_unlock(lock));
}

Or however that trickery works.
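
FWIW, a compile-only user-space rendering of that shape, with
pv_spinlocks_enabled / pv_queue_spin_unlock() as placeholder names and an
ordinary unlikely-style branch standing in for whatever boot-time patching
the real thing would use:

#include <stdatomic.h>
#include <stdbool.h>

struct qspinlock { _Atomic unsigned char locked; };

extern bool pv_spinlocks_enabled;			/* decided once at boot */
void pv_queue_spin_unlock(struct qspinlock *lock);	/* out of line: cmpxchg + kick */

static inline void queue_spin_unlock(struct qspinlock *lock)
{
	/* Keep the inline body tiny; in a real kernel the branch itself
	 * would ideally be patched away rather than taken at runtime. */
	if (__builtin_expect(pv_spinlocks_enabled, 0))
		pv_queue_spin_unlock(lock);
	else
		atomic_store_explicit(&lock->locked, 0, memory_order_release);
}

The point being that the native path compiles down to the single byte store
while all the PV details stay out of line.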
