Message-ID: <20070809142430.GA19817@shell.boston.redhat.com>
Date:	Thu, 9 Aug 2007 10:24:30 -0400
From:	Chris Snook <csnook@...hat.com>
To:	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	torvalds@...ux-foundation.org
Cc:	netdev@...r.kernel.org, akpm@...ux-foundation.org, ak@...e.de,
	heiko.carstens@...ibm.com, davem@...emloft.net,
	schwidefsky@...ibm.com, wensong@...ux-vs.org, horms@...ge.net.au,
	wjiang@...ilience.com, cfriesen@...tel.com, zlynx@....org,
	rpjday@...dspring.com, jesper.juhl@...il.com
Subject: [PATCH 24/24] document volatile atomic_read() behavior

From: Chris Snook <csnook@...hat.com>

Update atomic_ops.txt to reflect the newly consistent behavior of
atomic_read(), and to note that volatile (in declarations) is now
considered harmful.

Signed-off-by: Chris Snook <csnook@...hat.com>

--- linux-2.6.23-rc2-orig/Documentation/atomic_ops.txt	2007-07-08 19:32:17.000000000 -0400
+++ linux-2.6.23-rc2/Documentation/atomic_ops.txt	2007-08-09 08:24:32.000000000 -0400
@@ -12,7 +12,7 @@
 C integer type will fail.  Something like the following should
 suffice:
 
-	typedef struct { volatile int counter; } atomic_t;
+	typedef struct { int counter; } atomic_t;
 
 	The first operations to implement for atomic_t's are the
 initializers and plain reads.
@@ -38,9 +38,17 @@
 
 Next, we have:
 
-	#define atomic_read(v)	((v)->counter)
+	#define atomic_read(v)	(*(volatile int *)&(v)->counter)
 
-which simply reads the current value of the counter.
+which reads the counter as though it were volatile.  This prevents the
+compiler from optimizing away repeated atomic_read() invocations without
+requiring a more expensive barrier().  Historically this has been
+accomplished by declaring the counter itself to be volatile, but the
+ambiguity of the C standard on the semantics of volatile makes this practice
+vulnerable to overly creative interpretation by compilers.  Explicit
+casting in atomic_read() ensures consistent behavior across architectures
+and compilers.  Even with this convenience in atomic_read(), busy-waiters
+should call cpu_relax().
 
 Now, we move onto the actual atomic operation interfaces.
 
-
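For reference, here is a minimal user-space sketch of the pattern the new
text describes.  The atomic_t, atomic_read() and cpu_relax() definitions
below are illustrative stand-ins rather than the kernel's own:

	/*
	 * Minimal user-space sketch of the pattern described above.  The
	 * definitions here are illustrative stand-ins, not the kernel's.
	 */
	typedef struct { int counter; } atomic_t;

	/*
	 * Reading through a volatile-qualified lvalue forces the compiler
	 * to reload the value on every invocation instead of caching it
	 * in a register across loop iterations.
	 */
	#define atomic_read(v)	(*(volatile int *)&(v)->counter)

	/*
	 * Stand-in for cpu_relax(): here just a compiler barrier; the
	 * real kernel version may also emit an architecture-specific
	 * hint (e.g. rep; nop on x86).
	 */
	static inline void cpu_relax(void)
	{
		__asm__ __volatile__("" ::: "memory");
	}

	/*
	 * A busy-wait loop.  Without the volatile cast in atomic_read(),
	 * the compiler could legally hoist the load out of the loop and
	 * spin forever on a stale value.  Even with the cast, spinners
	 * should still call cpu_relax() on each pass.
	 */
	static void wait_for_flag(atomic_t *flag)
	{
		while (atomic_read(flag) == 0)
			cpu_relax();
	}

The point of the sketch is only that the volatile cast keeps the load
inside the loop; it implies nothing about ordering or atomicity of the
surrounding code, which still requires explicit barriers where needed.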