Message-ID: <46C64FD6.6030902@intel.com>
Date: Fri, 17 Aug 2007 18:48:06 -0700
From: "Kok, Auke" <auke-jan.h.kok@...el.com>
To: David Miller <davem@...emloft.net>
CC: auke-jan.h.kok@...el.com, shemminger@...ux-foundation.org, netdev@...r.kernel.org
Subject: Re: [RFC] restore netdev_priv optimization (planb)

David Miller wrote:
> From: "Kok, Auke" <auke-jan.h.kok@...el.com>
> Date: Fri, 17 Aug 2007 18:21:25 -0700
>
>> this sounds highly optimistic ("64 queues is enough for everyone"?)
>> and probably will be quickly outdated by both hardware and demand...
>
> As such drivers appear in the tree we can adjust the value.
>
> Even the most aggressively multi-queued virtualization and 10GB
> ethernet chips I am aware of, both in production and in development,
> do not exceed this limit.
>
> Since you think this is worth complaining about, you must know of some
> exceptions? :-)

I actually don't, but I assume that demand for queues will quickly
increase once the feature becomes available. For example, in e1000
hardware (PCIe) we support 2 queues, and that is really old hardware
already (laugh); the 82575 has 4. The ixgbe driver that Ayyappan and I
posted already supports 64 rx queues and 32 tx. I can only expect the
next generation to take a bigger jump and implement at least 128
queues... :)

Auke
-
To unsubscribe from this list: send the line "unsubscribe netdev"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html