Message-ID: <AANLkTikWu2B512Sge7w=5T6LsOdedadfGHEC_Wi6YPZo@mail.gmail.com>
Date:	Tue, 25 Jan 2011 12:49:16 +1100
From:	Nick Piggin <npiggin@...il.com>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	"the arch/x86 maintainers" <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Nick Piggin <npiggin@...nel.dk>,
	Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
Subject: Re: [PATCH 3/6] x86/ticketlock: Use C for __ticket_spin_unlock

On Tue, Jan 25, 2011 at 12:42 PM, Jeremy Fitzhardinge <jeremy@...p.org> wrote:
> On 01/24/2011 05:13 PM, Nick Piggin wrote:
>> On Tue, Jan 25, 2011 at 10:41 AM, Jeremy Fitzhardinge <jeremy@...p.org> wrote:
>>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
>>>
>>> If we don't need to use a locked inc for unlock, then implement it in C.
>>>
>>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
>>> ---
>>>  arch/x86/include/asm/spinlock.h |   32 +++++++++++++++++---------------
>>>  1 files changed, 17 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
>>> index f48a6e3..0170ba9 100644
>>> --- a/arch/x86/include/asm/spinlock.h
>>> +++ b/arch/x86/include/asm/spinlock.h
>>> @@ -33,9 +33,21 @@
>>>  * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
>>>  * (PPro errata 66, 92)
>>>  */
>>> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
>>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>>> +{
>>> +       if (sizeof(lock->tickets.head) == sizeof(u8))
>>> +               asm (LOCK_PREFIX "incb %0"
>>> +                    : "+m" (lock->tickets.head) : : "memory");
>>> +       else
>>> +               asm (LOCK_PREFIX "incw %0"
>>> +                    : "+m" (lock->tickets.head) : : "memory");
>>> +
>>> +}
>>>  #else
>>> -# define UNLOCK_LOCK_PREFIX
>>> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
>>> +{
>>> +       lock->tickets.head++;
>>> +}
>>>  #endif
>>>
>>>  /*
>>> @@ -93,14 +105,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>>>
>>>        return tmp;
>>>  }
>>> -
>>> -static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>>> -{
>>> -       asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
>>> -                    : "+m" (lock->slock)
>>> -                    :
>>> -                    : "memory", "cc");
>>> -}
>>>  #else
>>>  static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>>>  {
>>> @@ -144,15 +148,13 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>>>
>>>        return tmp;
>>>  }
>>> +#endif
>>>
>>>  static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
>>>  {
>>> -       asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
>>> -                    : "+m" (lock->slock)
>>> -                    :
>>> -                    : "memory", "cc");
>>> +       __ticket_unlock_release(lock);
>>> +       barrier();              /* prevent reordering into locked region */
>>>  }
>>> -#endif
>> The barrier is wrong.
>
> In what way?  Do you think it should be on the other side?

Well the other side is where the locked region is :)
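To make the point concrete, here is a userspace analogue of the ticket lock (a sketch only, not the posted patch or the kernel's types): the ordering constraint belongs on the locked side of the unlock store, so the critical section cannot be moved past it. The names `struct ticket_lock`, `ticket_lock()` and `ticket_unlock()` are illustrative.

```c
#include <stdint.h>

/* Userspace sketch of a ticket lock: head is the ticket now being
 * served, tail is the next ticket to hand out. */
struct ticket_lock {
	union {
		uint16_t slock;
		struct { uint8_t head, tail; } tickets;
	};
};

static inline void ticket_lock(struct ticket_lock *lock)
{
	/* Take a ticket, then spin until it is being served. */
	uint8_t me = __atomic_fetch_add(&lock->tickets.tail, 1,
					__ATOMIC_RELAXED);
	while (__atomic_load_n(&lock->tickets.head, __ATOMIC_ACQUIRE) != me)
		; /* spin */
}

static inline void ticket_unlock(struct ticket_lock *lock)
{
	/* The release store itself orders the critical section *before*
	 * the unlock -- i.e. the barrier sits on the locked side of the
	 * store, not after it. */
	__atomic_store_n(&lock->tickets.head, lock->tickets.head + 1,
			 __ATOMIC_RELEASE);
}
```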


>> What makes me a tiny bit uneasy is that gcc is allowed to implement
>> this any way it wishes. OK there may be a NULL intersection of possible
>> valid assembly which is a buggy unlock... but relying on gcc to implement
>> lock primitives is scary. Does this really help in a way that can't be done
>> with the assembly versions?
>
> We rely on C/gcc for plenty of other subtle ordering things.  Spinlocks
> aren't particularly special in this regard.

Well, probably not orderings, but we do rely on it to do atomic
stores and loads to <= sizeof(long) data types, unfortunately.
We also rely on it not to speculatively store into data on the
not-taken side of branches, and things like that.

I guess it's OK, but it makes me cringe a little to see unlock just
do head++.