Date: Tue, 15 Jan 2013 18:56:42 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: lttng-dev@...ts.lttng.org, linux-kernel@...r.kernel.org,
	rp@...s.cs.pdx.edu, khlebnikov@...nvz.org,
	stern@...land.harvard.edu, shemminger@...tta.com
Subject: Re: [lttng-dev] [PATCH] Add ACCESS_ONCE() to avoid compiler splitting assignments

* Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> As noted by Konstantin Khlebnikov, gcc can split assignment of
> constants to long variables (https://lkml.org/lkml/2013/1/15/141),
> though assignment of NULL (0) is OK.  Assuming that a gcc bug is
> fixed (http://gcc.gnu.org/bugzilla/attachment.cgi?id=29169&action=diff
> has a patch), making the store be volatile keeps gcc from splitting.
>
> This commit therefore applies ACCESS_ONCE() to CMM_STORE_SHARED(),
> which is the underlying primitive used by rcu_assign_pointer().

Hi Paul,

I recognise that this is an issue in the Linux kernel, since a simple
store is used and expected to be performed atomically when aligned.
However, I think this does not affect liburcu, see below:

>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>
> diff --git a/urcu/system.h b/urcu/system.h
> index 2a45f22..7a1887e 100644
> --- a/urcu/system.h
> +++ b/urcu/system.h
> @@ -49,7 +49,7 @@
>   */
>  #define CMM_STORE_SHARED(x, v)		\
>  	({				\
> -		__typeof__(x) _v = _CMM_STORE_SHARED(x, v);	\
> +		__typeof__(x) CMM_ACCESS_ONCE(_v) = _CMM_STORE_SHARED(x, v);	\

Here, the macro "_CMM_STORE_SHARED(x, v)" is doing the actual store:
it stores v into "x". So adding a CMM_ACCESS_ONCE(_v), as you propose
here, only ensures that the return value (usually unused), located on
the stack, is accessed with a volatile access, which does not make
much sense. What really matters is the _CMM_STORE_SHARED() macro:

  #define _CMM_STORE_SHARED(x, v)	({ CMM_ACCESS_ONCE(x) = (v); })

which already uses a volatile access for the store.
So this seems to be a case where our preemptive use of volatile for
stores, in addition to loads, made us bug-free for a gcc behavior that
was unexpected at the time we implemented this macro. Just a touch of
paranoia seems to be a good thing sometimes. ;-)

Thoughts ?

Thanks,

Mathieu

> 		cmm_smp_wmc();		\
> 		_v;			\
> 	})
>
>
> _______________________________________________
> lttng-dev mailing list
> lttng-dev@...ts.lttng.org
> http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com