Message-ID: <alpine.LFD.1.10.0806041249470.3473@woody.linux-foundation.org>
Date:	Wed, 4 Jun 2008 12:57:13 -0700 (PDT)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Nick Piggin <npiggin@...e.de>
cc:	Ingo Molnar <mingo@...e.hu>, David Howells <dhowells@...hat.com>,
	Ulrich Drepper <drepper@...hat.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 0/3] 64-bit futexes: Intro



On Tue, 3 Jun 2008, Nick Piggin wrote:
> 
> I think I optimised our unlock_page so that it can do a non-atomic
> unlock in the fastpath (no waiters) using 2 bits. In practice it was
> still atomic, but only because other page flags operations could
> operate on ->flags at the same time.

I'd be *very* nervous about this.

> We don't require any load/store barrier in the unlock_page fastpath
> because the bits are in the same word, so cache coherency gives us a
> sequential ordering anyway.

Yes and no.

Yes, the bits are in the same word, so cache coherency guarantees a lot.

HOWEVER. If you do the sub-word write using a regular store, you are now 
invoking the _one_ non-coherent part of the x86 memory pipeline: the store 
buffer. Normal stores can (and will) be forwarded to subsequent loads from 
the store buffer, and they are not strongly ordered wrt cache coherency 
while they are buffered.

IOW, on x86, loads are ordered wrt loads, and stores are ordered wrt other 
stores, but loads are *not* ordered wrt other stores in the absence of a 
serializing instruction, and it's exactly because of the write buffer.
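
As a rough userspace illustration (not kernel code, just a sketch of the 
effect), the classic store-buffer litmus test shows exactly this: each 
thread does a plain store and then a load of the other location, and on 
x86 both loads can legitimately return the old value because both stores 
are still sitting in the store buffers.

#include <pthread.h>
#include <stdio.h>

static volatile int x, y;	/* both start at 0 */
static volatile int r0, r1;

static void *thread0(void *unused)
{
	x = 1;		/* plain store: goes into the store buffer */
	r0 = y;		/* load can complete before the store is visible */
	return NULL;
}

static void *thread1(void *unused)
{
	y = 1;
	r1 = x;
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* A single run; in practice you'd loop many times to catch the
	   reordered r0 == 0 && r1 == 0 outcome. */
	pthread_create(&a, NULL, thread0, NULL);
	pthread_create(&b, NULL, thread1, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("r0=%d r1=%d\n", r0, r1);
	return 0;
}

Put a serializing instruction (mfence, or any locked op) between the 
store and the load on both sides, and the 0/0 outcome goes away.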

So:

> But actually if we're careful, we can put them in separate parts of the 
> word and use the sub-word operations on x86 to avoid the atomic 
> requirement. I'm not aware of any architecture in which operations to 
> the same word could be out of order.

See above. The above is unsafe, because if you do a regular store to a 
partial word, with no serializing instructions between that and a 
subsequent load of the whole word, the value of the store can be bypassed 
from the store buffer, and the load from the other part of the word can be 
carried out _before_ the store has actually gotten that cacheline 
exclusively!

So when you do

	movb reg,(byteptr)	# byte store: sits in the store buffer
	movl (byteptr),reg	# word load: low byte forwarded, upper bits possibly stale

you may actually get old data in the upper 24 bits, along with new data in 
the lower 8.

I think.
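
To make that concrete in C terms (the names and bit layout here are made 
up, this is not the real page->flags layout): suppose the lock lives in 
the low byte of a 32-bit flags word and a waiters bit lives above it. 
The unlock fastpath below is the movb-then-movl sequence, and it can miss 
a waiter that another CPU flagged just before the load.

#include <stdint.h>

#define FLAG_WAITERS	(1u << 8)	/* illustrative: some bit above the low byte */

struct thing {
	uint32_t flags;		/* low byte: "locked", bit 8: "waiters" */
};

static void wake_up_waiters(struct thing *t)
{
	(void)t;		/* stand-in for the real wakeup */
}

static void broken_unlock(struct thing *t)
{
	/* movb: plain byte store clearing the lock (assumes little-endian,
	   so the low byte is the first byte of the word) */
	*(volatile uint8_t *)&t->flags = 0;

	/* movl: word-sized load.  The low byte can be forwarded from the
	   store buffer while the upper 24 bits come from a cacheline this
	   CPU may not yet own exclusively, so a FLAG_WAITERS bit that
	   another CPU set just now can be missed and the wakeup skipped. */
	if (*(volatile uint32_t *)&t->flags & FLAG_WAITERS)
		wake_up_waiters(t);
}

int main(void)
{
	static struct thing t = { .flags = 1 };	/* locked, no waiters */

	broken_unlock(&t);
	return 0;
}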

Anyway, be careful. The cacheline itself will always be coherent, but the 
store buffer is not going to be part of the coherency rules, and without 
serialization (or locked ops), you _are_ going to invoke the store buffer!
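
Roughly, in userspace terms with compiler builtins (the kernel would use 
its own primitives for this), the two safe options look like:

#include <stdint.h>

#define FLAG_LOCKED	(1u << 0)
#define FLAG_WAITERS	(1u << 8)

/* Option 1: a locked RMW clears the lock and returns a coherent
   snapshot of the whole word in one atomic step. */
static uint32_t unlock_with_locked_op(uint32_t *flags)
{
	return __atomic_fetch_and(flags, ~FLAG_LOCKED, __ATOMIC_SEQ_CST);
}

/* Option 2: keep the plain byte store, but serialize before the load
   so it cannot be satisfied ahead of the store (mfence on x86). */
static uint32_t unlock_with_barrier(uint32_t *flags)
{
	*(volatile uint8_t *)flags = 0;			/* clear low byte (little-endian) */
	__atomic_thread_fence(__ATOMIC_SEQ_CST);	/* full barrier */
	return *(volatile uint32_t *)flags;
}

int main(void)
{
	uint32_t a = FLAG_LOCKED | FLAG_WAITERS;
	uint32_t b = FLAG_LOCKED | FLAG_WAITERS;

	/* Both variants must still observe the waiters bit after unlocking. */
	return ((unlock_with_locked_op(&a) & FLAG_WAITERS) &&
		(unlock_with_barrier(&b) & FLAG_WAITERS)) ? 0 : 1;
}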

		Linus
