Message-ID: <521BB71F.6080300@hp.com>
Date:	Mon, 26 Aug 2013 16:14:23 -0400
From:	Waiman Long <waiman.long@...com>
To:	Alexander Fyodorov <halcy@...dex.ru>
CC:	linux-kernel <linux-kernel@...r.kernel.org>,
	"Chandramouleeswaran, Aswin" <aswin@...com>,
	"Norton, Scott J" <scott.norton@...com>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH RFC v2 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation

On 08/22/2013 09:28 AM, Alexander Fyodorov wrote:
> 22.08.2013, 05:04, "Waiman Long" <waiman.long@...com>:
>> On 08/21/2013 11:51 AM, Alexander Fyodorov wrote:
>> In this case, we should have smp_wmb() before freeing the lock. The
>> question is whether we need to do a full mb() instead. The x86 ticket
>> spinlock unlock code is just a regular add instruction except for some
>> exotic processors. So it is a compiler barrier but not really a memory
>> fence. However, we may need to do a full memory fence for some other
>> processors.
> The thing is that the x86 ticket spinlock code does have full memory barriers in both the lock() and unlock() paths: the "add" instruction there has a "lock" prefix, which implies a full memory barrier. So it is better to use smp_mb() and let each architecture define it.

I also thought that the x86 spinlock unlock path was an atomic add. It
only recently came to my attention that this is not the case:
UNLOCK_LOCK_PREFIX is mapped to "" (a plain, non-atomic add) except on
some old 32-bit x86 processors.
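
For reference, the relevant definitions in arch/x86/include/asm/spinlock.h
of a contemporary tree look roughly like this (quoted from memory, so
please double-check against the actual source):

#if defined(CONFIG_X86_32) && \
	(defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE))
/*
 * On PPro SMP or if we are using OOSTORE, we use a locked operation
 * to unlock as well (PPro errata 66, 92).
 */
# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
#else
# define UNLOCK_LOCK_PREFIX
#endif

static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
{
	/* A plain add on most CPUs; a locked add only in the errata cases. */
	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
}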

>> At this point, I am inclined to have either a smp_wmb() or smp_mb() at
>> the beginning of the unlock function and a barrier() at the end.
>>
>> As the lock/unlock functions can be inlined, it is possible that a
>> memory variable can be accessed earlier in the calling function and the
>> stale copy may be used in the inlined lock/unlock function instead of
>> fetching a new copy. That is why I prefer a more liberal use of
>> ACCESS_ONCE() for safety purposes.
> That is impossible: both lock() and unlock() must contain either a full memory barrier or an atomic operation that returns a value. Both prohibit such optimizations, and the compiler cannot reuse any cached global variable across them. So this use of ACCESS_ONCE() is unneeded.
>
> You can read more on this in Documentation/volatile-considered-harmful.txt
>
> And although I already suggested that, have you read Documentation/memory-barriers.txt? There is a lot of valuable information there.
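
For context, ACCESS_ONCE() is nothing more than a volatile cast that
forces exactly one access; by itself it provides no ordering. The
guarantee described above comes instead from the "memory" clobber that
barrier(), smp_mb() and value-returning atomics all carry, which forces
the compiler to discard any register-cached copies of memory. Roughly,
from the headers of that era:

/* include/linux/compiler.h */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

/* include/linux/compiler-gcc.h */
#define barrier() __asm__ __volatile__("": : :"memory")

Since an inlined lock/unlock containing such a barrier already forces a
fresh load afterwards, the stale-copy scenario cannot arise there.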

I did read Documentation/memory-barriers.txt. I will read 
volatile-considered-harmful.txt.
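
If the unlock path does grow an smp_mb() as suggested, the fast path
would look roughly like the sketch below. This is illustrative only:
the "locked" field name and layout are placeholders, not the actual
definitions from the patch.

static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
	/*
	 * Full barrier: orders the critical section before the
	 * releasing store, and its implied compiler barrier also
	 * prevents reuse of stale copies, so the store itself needs
	 * no ACCESS_ONCE().
	 */
	smp_mb();
	lock->locked = 0;	/* placeholder lock-byte field */
}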

Regards,
Longman