lists.openwall.net: Open Source and information security mailing list archives
Message-ID: <20140914140639.GO5387@tassilo.jf.intel.com>
Date: Sun, 14 Sep 2014 07:06:39 -0700
From: Andi Kleen <ak@...ux.intel.com>
To: Konstantin Khlebnikov <koct9i@...il.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Dmitry Vyukov <dvyukov@...gle.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH RFC] x86_64: per-cpu memory for user-space

On Sat, Sep 13, 2014 at 06:35:34PM +0400, Konstantin Khlebnikov wrote:
> This patch implements user-space per-cpu memory in the same manner as in
> kernel-space: each cpu has its own %gs base address. On x86_64 %fs is used
> for thread local storage, %gs usually is free.
>
> User-space application cannot prevent preemption but x86 read-modify-write
> operations are atomic against interrupts and context switches. Thus percpu
> counters, ring-buffer cursors, per-cpu locks and other cool things might
> be implemented in a very efficient way.

Do you have some concrete examples for the more complex operations?

It seems to me the limitation to a simple instruction will be very
limiting for anything more complicated than a counter.

Also it's not even clear how someone would implement retry (short of
something like kuchannel)

Of course it wouldn't be a problem with TSX transactions, but it's
not clear they need it.

The other problem with the approach is, how would cpu hotplug
be handled?

> By the way, newer Intel cpus have even faster instructions for
> changing %fs/%gs, but they are still not supported by the kernel.

Patch kits are pending.

-Andi

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/