lists.openwall.net — Open Source and information security mailing list archives
Date: Mon, 28 Jul 2008 16:16:31 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Ingo Molnar <mingo@...e.hu>
CC: Jens Axboe <jens.axboe@...cle.com>, Andi Kleen <ak@...e.de>, Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: x86: Is there still value in having a special tlb flush IPI vector?

Now that a normal smp_call_function is no longer an enormous bottleneck, is there still value in having a specialised IPI vector for tlb flushes? It seems like quite a lot of duplicate code.

The 64-bit tlb flush multiplexes the various cpus across 8 vectors to increase scalability. If this is a big issue, then the smp function call code can (and should) do the same thing. (Though looking at it more closely, the way the code uses the 8 vectors is actually a less general way of doing what smp_call_function is doing anyway.)

Thoughts?

(And uv should definitely be hooking pvops if it wants its own flush_tlb_others; vsmp sets the precedent for a subarch-like use of pvops.)

    J
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/