Open Source and information security mailing list archives
Date: Thu, 28 Aug 2008 13:56:09 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: Brice.Goglin@...ia.fr
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [RFC] export irq_set/get_affinity() for multiqueue network drivers

From: Brice Goglin <Brice.Goglin@...ia.fr>
Date: Thu, 28 Aug 2008 22:21:53 +0200

> With more and more drivers using multiqueues, I think we need a nice way
> to bind MSI-X interrupts from within the drivers. I am not sure what's
> best; the attached (untested) patch would just export the existing
> irq_set_affinity() and add irq_get_affinity(). Comments?

I think we should instead have some kind of generic facility in the IRQ
layer that allows specifying the usage model of a device's interrupts, so
that the IRQ layer can choose default affinities.

I never notice any of this complete insanity on sparc64 because we flat
spread out all of the interrupts across the machine.

What we don't want is drivers choosing IRQ affinity settings: they have no
idea about NUMA topology, which NUMA node the PCI controller sits behind,
which cpus are there, etc., and without that kind of knowledge you cannot
possibly make affinity decisions properly.

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html