Date:	Fri, 10 Jan 2014 17:12:27 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Andrea Arcangeli <aarcange@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Mel Gorman <mgorman@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Dave Jones <davej@...hat.com>,
	Darren Hart <dvhart@...ux.intel.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v2 1/1] mm: fix the theoretical compound_lock() vs
	prep_new_page() race

On 01/09, Andrea Arcangeli wrote:
>
> >
> > But we probably need barrier() in between, we can't use ACCESS_ONCE().
>
> After get_page_unless_zero I don't think there's any need for
> barrier(). barrier() should be implicit in __atomic_add_unless;
> in fact it should be a full smp_mb() equivalent too. Memory is always
> clobbered there and the asm is volatile.

Yes, yes,

> My wondering was only about the runtime (not compiler) barrier after
> running PageTail and before compound_lock,

Yes, this is what I meant.

Except I really meant the compiler barrier, although I do not think it
is actually needed; test_and_set_bit() implies mb().

> because bit_spin_lock has
> only acquire semantics, so in the absence of the branch that bails out
> of the lock, the spinlock could run before PageTail. If the branch is a
> good enough guarantee for all archs, it's a good and cheap solution.

The recent "[PATCH v6 tip/core/locking 3/8] Documentation/memory-barriers.txt:
Prohibit speculative writes" from Paul says:

	No SMP architecture currently supporting Linux allows speculative writes,

	...

	+ACCESS_ONCE(), which preserves the ordering between
	+the load from variable 'a' and the store to variable 'b':
	+
	+       q = ACCESS_ONCE(a);
	+       if (q) {
	+               ACCESS_ONCE(b) = p;
	+               do_something();
	+       }


We can't use ACCESS_ONCE(), but I think that

		if (PageTail(page)) {
			barrier();
			compound_lock(page_head);
		}

should obviously work (even if compound_lock() didn't imply mb).

Oleg.

