Message-ID: <20070813110639.GB24018@shell.boston.redhat.com>
Date: Mon, 13 Aug 2007 07:06:39 -0400
From: Chris Snook <csnook@...hat.com>
To: linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: akpm@...ux-foundation.org, paulmck@...ux.vnet.ibm.com,
Segher Boessenkool <segher@...nel.crashing.org>,
"Luck, Tony" <tony.luck@...el.com>,
Chris Friesen <cfriesen@...tel.com>,
"Robert P. J. Day" <rpjday@...dspring.com>
Subject: [PATCH 2/23] make atomic_read() and atomic_set() behavior consistent on alpha

From: Chris Snook <csnook@...hat.com>

Use volatile consistently in atomic.h on alpha.

Signed-off-by: Chris Snook <csnook@...hat.com>
--- linux-2.6.23-rc3-orig/include/asm-alpha/atomic.h	2007-07-08 19:32:17.000000000 -0400
+++ linux-2.6.23-rc3/include/asm-alpha/atomic.h	2007-08-13 05:00:36.000000000 -0400
@@ -14,21 +14,35 @@
 
 
 /*
- * Counter is volatile to make sure gcc doesn't try to be clever
- * and move things around on us. We need to use _exactly_ the address
- * the user gave us, not some alias that contains the same information.
+ * Make sure gcc doesn't try to be clever and move things around
+ * on us. We need to use _exactly_ the address the user gave us,
+ * not some alias that contains the same information.
  */
-typedef struct { volatile int counter; } atomic_t;
-typedef struct { volatile long counter; } atomic64_t;
+typedef struct { int counter; } atomic_t;
+typedef struct { long counter; } atomic64_t;
 
 #define ATOMIC_INIT(i)		( (atomic_t) { (i) } )
 #define ATOMIC64_INIT(i)	( (atomic64_t) { (i) } )
 
-#define atomic_read(v)		((v)->counter + 0)
-#define atomic64_read(v)	((v)->counter + 0)
+static __inline__ int atomic_read(atomic_t *v)
+{
+	return *(volatile int *)&v->counter + 0;
+}
+
+static __inline__ long atomic64_read(atomic64_t *v)
+{
+	return *(volatile long *)&v->counter + 0;
+}
 
-#define atomic_set(v,i)		((v)->counter = (i))
-#define atomic64_set(v,i)	((v)->counter = (i))
+static __inline__ void atomic_set(atomic_t *v, int i)
+{
+	*(volatile int *)&v->counter = i;
+}
+
+static __inline__ void atomic64_set(atomic64_t *v, long i)
+{
+	*(volatile long *)&v->counter = i;
+}
 
 /*
  * To get proper branch prediction for the main line, we must branch
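
To see the idiom in isolation, here is a minimal user-space sketch of the
access pattern the new accessors use. The my_atomic_* names and the main()
driver are hypothetical, for illustration only; only the cast-the-address-
to-volatile trick mirrors the patch.

#include <stdio.h>

/* Plain, non-volatile member, as in the new atomic_t above. */
typedef struct { int counter; } my_atomic_t;

static inline int my_atomic_read(my_atomic_t *v)
{
	/*
	 * Casting the member's address to a volatile-qualified pointer
	 * forces the compiler to emit a real load from exactly this
	 * address on every call, instead of reusing a cached value.
	 */
	return *(volatile int *)&v->counter;
}

static inline void my_atomic_set(my_atomic_t *v, int i)
{
	/* Likewise, the store may not be deferred, merged, or dropped. */
	*(volatile int *)&v->counter = i;
}

int main(void)
{
	my_atomic_t a = { 0 };

	my_atomic_set(&a, 42);
	printf("counter = %d\n", my_atomic_read(&a));
	return 0;
}

As in the patch itself, the volatile qualifier lives on the accesses rather
than on the type, so only the read and set accessors pay for it and the
counter field stays an ordinary int.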