Date:	Fri, 3 Feb 2012 09:16:43 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Andrew MacLeod <amacleod@...hat.com>
Cc:	paulmck@...ux.vnet.ibm.com, Torvald Riegel <triegel@...hat.com>,
	Jan Kara <jack@...e.cz>, LKML <linux-kernel@...r.kernel.org>,
	linux-ia64@...r.kernel.org, dsterba@...e.cz, ptesarik@...e.cz,
	rguenther@...e.de, gcc@....gnu.org
Subject: Re: Memory corruption due to word sharing

On Fri, Feb 3, 2012 at 8:38 AM, Andrew MacLeod <amacleod@...hat.com> wrote:
>
> The atomic intrinsics were created for C++11 memory model compliance, but I
> am certainly open to enhancements that would make them more useful.   I am
> planning some enhancements for 4.8 now, and it sounds like you may have some
> suggestions...

So we have several atomics we use in the kernel, with the more common being

 - add (and subtract) and cmpxchg of both 'int' and 'long'

 - add_return (add and return new value)

 - special cases of the above:
      dec_and_test (decrement and test result for zero)
      inc_and_test (increment and test result for zero)
      add_negative (add and check if result is negative)

   The special cases are because older x86 cannot do the generic
"add_return" efficiently - it needs xadd - but can do atomic versions
that test the end result and give zero or sign information.

 - atomic_add_unless() - basically an optimized cmpxchg.

 - atomic bit array operations (bit set, clear, set-and-test,
clear-and-test). We do them on "unsigned long" exclusively, and in
fact we do them on arrays of unsigned long, ie we have the whole "bts
reg,mem" semantics. I'm not sure we really care about the atomic
versions for the arrays, so it's possible we only really care about a
single long.

   The only complication with the bit setting is that we have a
concept of "set/clear bit with memory barrier before or after the bit
operation" (for locking). We don't do the whole release/acquire thing, though.

 - compare_xchg_double
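
To be concrete, here is a minimal sketch (not kernel code) of how a few
of the above might be expressed with the __atomic builtins under
discussion. The names are made up for illustration and the memory orders
are assumptions, not what the kernel's primitives actually guarantee:

#include <limits.h>

/* decrement and test result for zero */
static inline int sketch_dec_and_test(int *v)
{
	return __atomic_sub_fetch(v, 1, __ATOMIC_SEQ_CST) == 0;
}

/* add 'a' unless the current value is 'u' - basically a cmpxchg loop */
static inline int sketch_add_unless(int *v, int a, int u)
{
	int old = __atomic_load_n(v, __ATOMIC_RELAXED);

	do {
		if (old == u)
			return 0;
	} while (!__atomic_compare_exchange_n(v, &old, old + a, 0,
					      __ATOMIC_SEQ_CST,
					      __ATOMIC_RELAXED));
	return 1;
}

/* set a bit in an array of unsigned long, like "bts reg,mem" */
static inline void sketch_set_bit(unsigned int nr, unsigned long *addr)
{
	unsigned long bits_per_long = sizeof(long) * CHAR_BIT;
	unsigned long mask = 1UL << (nr % bits_per_long);

	__atomic_fetch_or(addr + nr / bits_per_long, mask, __ATOMIC_RELAXED);
}

(None of that is meant as the actual implementation - it's only there to
pin down which operations are meant.)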

We also do byte/word atomic increments and decrements, but that's in
the x86 spinlock implementation, so it's not a generic need.
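
As a sketch of what's meant (not the actual spinlock code - the type and
the acquire ordering here are assumptions), that's roughly a fetch-and-add
on a sub-word type:

static inline unsigned short sketch_ticket_fetch_inc(unsigned short *ticket)
{
	/* word-wide atomic increment, acquire ordering assumed for lock entry */
	return __atomic_fetch_add(ticket, 1, __ATOMIC_ACQUIRE);
}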

We also do the add versions in particular as CPU-local optimizations
that do not need to be SMP-safe, but do need to be interrupt-safe. On
x86, this is just an r-m-w op; on most other architectures it ends up
being the usual load-locked/store-conditional.
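
A rough sketch of that (illustrative only - "local_inc" here is not the
kernel's interface): on x86 a single r-m-w instruction can't be split by
an interrupt on the same CPU, so no lock prefix is needed, while other
architectures would have to mask interrupts or use LL/SC around the
plain fallback:

static inline void local_inc(unsigned long *v)
{
#if defined(__x86_64__)
	asm volatile("incq %0" : "+m" (*v));	/* irq-safe, not SMP-safe */
#else
	*v += 1;	/* placeholder: real code would mask interrupts or use LL/SC */
#endif
}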

I think that's pretty much it, but maybe I'm missing something.

Of course, locking itself tends to be a special case of the above with
extra memory barriers, but it's usually hidden in asm for other
reasons (the bit-op + barrier being a special case).

                      Linus
