Message-ID: <20100817141638.GA5722@Krystal>
Date:	Tue, 17 Aug 2010 10:16:38 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	josh@...htriplett.org, dvhltc@...ibm.com, niv@...ibm.com,
	tglx@...utronix.de, peterz@...radead.org, Valdis.Kletnieks@...edu,
	dhowells@...hat.com, eric.dumazet@...il.com,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH tip/core/rcu 08/10] rcu: Add a TINY_PREEMPT_RCU

* Steven Rostedt (rostedt@...dmis.org) wrote:
> On Mon, 2010-08-16 at 18:07 -0400, Mathieu Desnoyers wrote:
> 
> > > Moving this down past the check of t->rcu_read_unlock_special (which is
> > > now covered by ACCESS_ONCE()) would violate the C standard, as it would
> > > be equivalent to moving a volatile up past a sequence point.
> > 
> > Hrm, I'm not quite convinced yet. I am not concerned about gcc moving
> > the volatile access prior to the sequence point (as you say, this is
> > forbidden by the C standard), but rather that:
> > 
> > --(t->rcu_read_lock_nesting)
> > 
> > could be split in two distinct operations:
> > 
> > read t->rcu_read_lock_nesting
> > decrement t->rcu_read_lock_nesting
> > 
> > Note that in order to know the result required to pass the sequence
> > point "&&" (the test), we only need to perform the read, not the
> > decrement. AFAIU, gcc would be in its rights to move the
> > t->rcu_read_lock_nesting update after the volatile access.
> > 
> 
> If we are this concerned, what about just doing:
> 
> 	--t->rcu_read_lock_nesting;
> 	if (ACCESS_ONCE(t->rcu_read_lock_nesting) == 0 &&
> 	    unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))

I'd be concerned that there is no ordering guarantee that the
non-volatile --t->rcu_read_lock_nesting is performed before the
ACCESS_ONCE(t->rcu_read_unlock_special) read.

My concern is that the compiler might be allowed to turn your code into:

        if (ACCESS_ONCE(t->rcu_read_lock_nesting) == 1 &&
            unlikely(ACCESS_ONCE(t->rcu_read_unlock_special))) {
		--t->rcu_read_lock_nesting;
		do_something();
	} else
		--t->rcu_read_lock_nesting;

So whether or not the compiler is actually allowed to do this under the
various interpretations of volatile, I strongly recommend against using
volatile accesses to provide compiler ordering guarantees. It is bad in
terms of code documentation (we don't document _what_ is ordered), and
the ordering guarantees that volatile does provide seem to be very easy
to misinterpret.

ACCESS_ONCE() should be only that: a macro that tells the compiler the
access should be performed only once. Why are we suddenly presuming it
should have any ordering semantics?

It should be totally valid to create arch-specific ACCESS_ONCE() macros
that only perform the "read once", without the ordering guarantees
provided by the current "volatile" ACCESS_ONCE() implementation. The
following code handles only unsigned long, but you get the idea: there
is no volatile at all. "val" is read only once because the "+m" (val)
constraint tells the compiler (falsely) that the assembly modifies the
value, so the statement has a side effect and gcc won't be tempted to
re-issue it.

static inline unsigned long arch_access_once(unsigned long val)
{
	unsigned long ret;

#if (__BITS_PER_LONG == 32)
	asm ("movl %1,%0" : "=r" (ret), "+m" (val));
#else
	asm ("movq %1,%0" : "=r" (ret), "+m" (val));
#endif
	return ret;
}

Thanks,

Mathieu

> 
> -- Steve
> 
> 

-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
