Open Source and information security mailing list archives
Date: Sun, 2 Aug 2009 11:20:56 +1000
From: Paul Mackerras <paulus@...ba.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Tejun Heo <tj@...nel.org>
Subject: Re: [GIT PULL] Additional x86 fixes for 2.6.31-rc5

Linus Torvalds writes:
> On Sat, 1 Aug 2009, H. Peter Anvin wrote:
> > In particular, if I remember right the problem with using __thread for
> > percpu was exactly that the current cpuness can change almost anywhere,
> > unless preemption is disabled.
>
> That shouldn't matter. If it uses '%gs', it should all just work
> automatically. But if gcc does something different for thread-local, it's
> basically useless.

When I tried using __thread for per-cpu variables on ppc64, I found
that gcc would sometimes precompute and cache the address of a per-cpu
variable even though it could always access the variable using an
offset from r13. The cached address was wrong if the task got moved
to a different cpu, of course, but there was no way to tell gcc that.
Compiler barriers don't help because they say that the contents of
variables might have changed, but not their addresses.

So on x86 the concern would be that gcc might do

	lea %gs:foo,%rbx

and then use (%rbx) to refer to foo later on.

It would be possible to use __thread for per-task variables rather
than having to put all per-task things in the task_struct, but
__thread doesn't work for per-cpu variables in my experience.

Paul.