Message-Id: <200710260943.44719.nickpiggin@yahoo.com.au>
Date:	Fri, 26 Oct 2007 09:43:44 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Andi Kleen <ak@...ell.com>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: Is gcc thread-unsafe?

On Friday 26 October 2007 09:09, Andi Kleen wrote:
> On Friday 26 October 2007 00:49:42 Nick Piggin wrote:

> > Marking volatile I think is out of the question. To start with,
> > volatile creates really poor code (and most of the time we actually
> > do want the code in critical sections to be as tight as possible).
>
> Poor code is better than broken code, I would say. Besides,
> the cross-CPU synchronization paths are likely dominated by
> cache misses anyway; it's unlikely they're actually limited
> by the core CPU. So it'll probably not matter all that much
> if the code is poor or not.

But we would have to mark every memory access inside the critical section
as volatile, even after the CPU synchronisation, and even in the
common case where there is no contention or cacheline bouncing.

Sure, real code is dominated by cache misses anyway, etc etc.
However, volatile prevents a lot of real optimisations that we'd
actually want to do, and increases icache footprint etc.
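A minimal sketch of the code-quality point (all names here are hypothetical, just for illustration): with the volatile qualifier the compiler must perform every load and store exactly as written, so the loop below does a memory round-trip per iteration; without volatile, the sum could live in a register and be written back once.

```c
volatile int shared_counter;    /* hypothetical shared variable */

int add_n(int n)
{
	int i;

	/* volatile forces a load and a store on every iteration;
	 * a plain int could be accumulated in a register instead. */
	for (i = 0; i < n; i++)
		shared_counter = shared_counter + 1;
	return shared_counter;
}
```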


> > But also because I don't think these bugs are just going to be
> > found easily.
> >
> > > It might be useful to come up with some kind of assembler pattern
> > > matcher to check if any such code is generated for the kernel
> > > and try it with different compiler versions.
> >
> > Hard to know how to do it. If you can, then it would be interesting.
>
> I checked my kernel for "adc" at least (for the trylock/++ pattern)
> and couldn't find any (in fact all of them were in
> data the compiler thought to be code). That was not an allyesconfig build,
> so I missed some code.

sbb as well.
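The trylock/increment pattern being grepped for can be sketched like this (the names are made up for illustration). The hazard is a compiler emitting the increment branchlessly with a carry trick, so the read-modify-write of the shared counter happens even when the lock was not taken:

```c
long stat;                      /* hypothetical shared counter */

void maybe_count(int acquired)  /* 'acquired' = trylock result */
{
	if (acquired)
		stat++;
	/*
	 * A branchless rendering on x86 might look like:
	 *     movl  acquired, %eax
	 *     negl  %eax              # CF = (acquired != 0)
	 *     adcq  $0, stat          # stat += CF: unconditional RMW!
	 * The store to 'stat' then happens even when the trylock
	 * failed, racing with whoever actually holds the lock.
	 */
}
```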


> For cmov it's at first harder because they're much more frequent
> and cmov to memory is a multiple-instruction pattern (the instruction
> only supports memory source operands). Luckily gcc
> doesn't know the if (x) mem = a; => ptr = cmov(x, &mem, &dummy); *ptr = a;
> transformation trick, so I don't think there are actually any conditional
> stores on current x86.
>
> It might be a problem on other architectures which support true
> conditional stores though (like IA64 or ARM).
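The transformation described above could look like this in C (all names hypothetical). Note the branchless form still executes a store on every call, merely redirected to a dummy slot when the condition is false:

```c
int mem, dummy;                       /* hypothetical locations */

void cond_store(int x, int a)
{
	if (x)
		mem = a;              /* store only when x != 0 */
}

void cond_store_branchless(int x, int a)
{
	int *p = x ? &mem : &dummy;   /* cmov-able pointer select */
	*p = a;                       /* store executes unconditionally */
}
```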

It might just depend on the instruction costs that gcc knows about.
For example, if ld/st is expensive, it might hoist them out of
loops etc. You don't even need one of these predicated or
pseudo-predicated store instructions.
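A sketch of the hoisting concern (hypothetical code): a compiler that models the store as expensive might keep the flag in a register across the loop and write it back once at the end, turning a conditional store into an unconditional one:

```c
int found_negative;                   /* hypothetical shared flag */

void scan(const int *v, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (v[i] < 0)
			found_negative = 1;  /* conditional as written */
	/*
	 * Possible transform with store hoisting:
	 *     tmp = found_negative;
	 *     for (...) if (v[i] < 0) tmp = 1;
	 *     found_negative = tmp;   -- store happens even with no hit
	 * No predicated-store instruction is needed for this to occur.
	 */
}
```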


> Also I'm not sure whether gcc knows any other tricks like the
> conditional add using carry, although I cannot think of any related
> to stores off the top of my head.
>
> Anyway, if it's only the conditional add, then if we ever catch such a
> case it could also be handled with inline assembly similar to local_inc().

But we don't actually know what it is, and it could change with
different architectures or versions of gcc. I think the sanest thing
is for gcc to help us out here, seeing as there is this very well
defined requirement that we want.

If you really still want the optimisation to occur, I don't think it
is too much to ask to use a local variable for the accumulator (e.g. in
the simple example case).
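The local-accumulator workaround might look like this (a hedged sketch; names are hypothetical): the compiler is free to play adc/cmov tricks on the private temporary, while the store to shared state stays explicitly guarded:

```c
long nr_events;                       /* hypothetical shared counter */

void count_if_acquired(int acquired)  /* 'acquired' = trylock result */
{
	long tmp = 0;                 /* private accumulator */

	if (acquired)
		tmp++;                /* branchless codegen here is harmless */
	if (tmp)
		nr_events += tmp;     /* shared store stays conditional */
}
```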
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
