Date:	Mon, 19 Oct 2015 16:18:53 +0100
From:	Catalin Marinas <catalin.marinas@....com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Will Deacon <will.deacon@....com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...nel.org>
Subject: Re: Q: schedule() and implied barriers on arm64

On Fri, Oct 16, 2015 at 10:28:11AM -0700, Paul E. McKenney wrote:
> So RCU needs the following sort of guarantee:
> 
> 	void task1(unsigned long flags)
> 	{
> 		WRITE_ONCE(x, 1);
> 		WRITE_ONCE(z, 1);
> 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
> 	}
> 
> 	void task2(unsigned long *flags)
> 	{
> 		raw_spin_lock_irqsave(&rnp->lock, *flags);
> 		smp_mb__after_unlock_lock();
> 		r1 = READ_ONCE(y);
> 		r2 = READ_ONCE(z);
> 	}
> 
> 	void task3(void)
> 	{
> 		WRITE_ONCE(y, 1);
> 		smp_mb();
> 		r3 = READ_ONCE(x);
> 	}
> 
> 	BUG_ON(!r1 && r2 && !r3); /* After the dust settles. */
> 
> In other words, if task2() acquires the lock after task1() releases it,
> all CPUs must agree on the order of the operations in the two critical
> sections, even if these other CPUs don't acquire the lock.
> 
> This same guarantee is needed if task1() and then task2() run in
> succession on the same CPU with no additional synchronization of any sort.
> 
> Does this work on arm64?

I think it does. If r3 == 0, it means that READ_ONCE(x) in task3 is
"observed" (in ARM ARM terms) by task1 before WRITE_ONCE(x, 1). The
smp_mb() in task3 implies that WRITE_ONCE(y, 1) is also observed by
task1.

A store-release is multi-copy atomic when "observed" with a load-acquire
(from task2). When on the same CPU, they are always observed in program
order. The store-release on ARM has the property that writes observed by
task1 before its store-release (that is, WRITE_ONCE(y, 1) in task3) will
be observed by other observers (task2) before the store-release itself
(the unlock) is observed.

The above rules guarantee that, when r3 == 0, WRITE_ONCE(y, 1) in task3
is observed by task2 (and task1), hence r1 == 1.

(a more formal proof would have to wait for Will to come back from
holiday ;))

-- 
Catalin
