Message-ID: <20100720153845.GA9122@phenom.dumpdata.com>
Date:	Tue, 20 Jul 2010 11:38:45 -0400
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Nick Piggin <npiggin@...e.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Xen-devel <xen-devel@...ts.xensource.com>,
	Avi Kivity <avi@...hat.com>, Jan Beulich <JBeulich@...ell.com>
Subject: Re: [Xen-devel] [PATCH RFC 03/12] x86/ticketlock: Use C for
 __ticket_spin_unlock

> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -33,9 +33,23 @@
>   * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>   * (PPro errata 66, 92)
>   */
> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> +	if (sizeof(lock->tickets.head) == sizeof(u8))
> +		asm (LOCK_PREFIX "incb %0"
> +		     : "+m" (lock->tickets.head) : : "memory");
> +	else
> +		asm (LOCK_PREFIX "incw %0"
> +		     : "+m" (lock->tickets.head) : : "memory");

Should those be 'asm volatile' to make them barriers as well? Or do we
not have to worry about that on a Pentium Pro SMP?
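
(To make the question concrete, a standalone sketch of the two forms I
mean -- the helper names and the userspace framing are mine, not from
the patch:

 /* the form in the patch: plain asm with a "memory" clobber, which (as
  * I understand the gcc docs) already stops gcc from caching or moving
  * memory accesses across the statement */
 static inline void head_inc(unsigned char *head)
 {
         asm ("lock; incb %0" : "+m" (*head) : : "memory");
 }

 /* the form I am asking about: 'volatile' additionally guarantees the
  * asm is never deleted and keeps its order relative to other volatile
  * asm statements */
 static inline void head_inc_volatile(unsigned char *head)
 {
         asm volatile ("lock; incb %0" : "+m" (*head) : : "memory");
 }
)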

> +
> +}
>  #else
> -# define UNLOCK_LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> +	barrier();
> +	lock->tickets.head++;
> +	barrier();
> +}

Got a question:
This extra barrier() (which I see gets removed in the git tree) was
added because the function is inlined, and hence the second barrier()
keeps gcc from re-ordering the __ticket_spin_unlock() instructions?
That looks like a big prerequisite for patch 7, where this function
expands to:


 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
        __ticket_t next = lock->tickets.head + 1; /* is this executed before
                 the lock->tickets.head++ because of the 1st barrier()?  Or
                 would that happen regardless, because gcc sees the data
                 dependency here? */

        __ticket_unlock_release(lock);            /* expands to
                 "barrier(); lock->tickets.head++; barrier();" */

+       __ticket_unlock_kick(lock, next);         /* so the second barrier()
                 affects this code, i.e. gcc won't re-order the
                 lock->tickets.head++ to happen after this call? */
 }
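
(Spelled out, the inlined sequence I am picturing -- my own expansion of
the above, not code from the series -- is:

         __ticket_t next = lock->tickets.head + 1; /* compute ticket to kick */
         barrier();                                /* 1st barrier()          */
         lock->tickets.head++;                     /* the actual unlock      */
         barrier();                                /* 2nd barrier()          */
         __ticket_unlock_kick(lock, next);

 so, if I read it right, the two barrier()s pin the head++ between the
 'next' computation and the kick call, as far as the compiler is
 concerned.)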


From what I've been reading, this barrier() (asm volatile("" : : : "memory"))
says: "don't re-order the instructions across this point, within this scope
and starting right below me"?  Or does it apply to the full scope of the
function/code logic, regardless of the 'inline' on one of the functions?
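
(Toy example of that first reading -- x, y and f() are globals/a function
of my own invention, and barrier() is spelled out as the asm above:

         #define barrier() asm volatile("" : : : "memory")

         extern int x, y;

         void f(void)
         {
                 int a = x;      /* gcc may re-order this load with other  */
                                 /* code above it...                       */
                 barrier();      /* ...but no memory access may be moved   */
                                 /* across this point, in either direction */
                 y = a + 1;      /* so this store cannot be hoisted above  */
                                 /* the barrier()                          */
         }
)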
