Date: Mon, 08 Oct 2007 09:34:50 -0400
From: jamal <hadi@...erus.ca>
To: David Miller <davem@...emloft.net>
Cc: peter.p.waskiewicz.jr@...el.com, krkumar2@...ibm.com, johnpol@....mipt.ru,
	herbert@...dor.apana.org.au, kaber@...sh.net,
	shemminger@...ux-foundation.org, jagana@...ibm.com,
	Robert.Olsson@...a.slu.se, rick.jones2@...com, xma@...ibm.com,
	gaagaan@...il.com, netdev@...r.kernel.org, rdreier@...co.com,
	mcarlson@...adcom.com, jeff@...zik.org, mchan@...adcom.com,
	general@...ts.openfabrics.org, kumarkr@...ux.ibm.com, tgraf@...g.ch,
	randy.dunlap@...cle.com, sri@...ibm.com
Subject: Re: [PATCH 1/4] [NET_SCHED] explict hold dev tx lock

On Sun, 2007-07-10 at 21:51 -0700, David Miller wrote:

> For these high performance 10Gbit cards it's a load balancing
> function, really, as all of the transmit queues go out to the same
> physical port so you could:
>
> 1) Load balance on CPU number.
> 2) Load balance on "flow"
> 3) Load balance on destination MAC
>
> etc. etc. etc.

The brain-block I am having is the parallelization aspect of it.
Whatever the scheme is, it needs to ensure the scheduler still works as
expected. For example, if it were a strict-priority scheduler, I would
expect that whatever goes out is always the highest priority first, and
that a low-priority packet is never let out while something high
priority is waiting to go. If I have the two priorities running on two
CPUs, then I can't guarantee that effect. IOW, I see the
scheduler/qdisc level as something that cannot be split across parallel
CPUs.

Do I make any sense? The rest of my understanding hinges on the above,
so let me stop here.

cheers,
jamal

-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
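[Editor's note: to make the ordering argument above concrete, here is a
minimal, hypothetical sketch in plain C of a two-band strict-priority
queue. It is not the kernel's sch_prio code; the band count, structure
names, and the single-dequeue-path assumption are illustrative. The point
it shows is the one jamal raises: with one dequeue path, the high band
always drains before the low band; give each band its own dequeue loop on
its own CPU and that guarantee no longer holds.]

/*
 * Illustrative two-band strict-priority queue (band 0 = high, band 1 = low).
 * Not the actual Linux sch_prio implementation.
 */
#include <stdio.h>

#define NUM_BANDS 2

struct pkt {
	struct pkt *next;
	int band;
};

struct prio_sched {
	struct pkt *head[NUM_BANDS];
	struct pkt *tail[NUM_BANDS];
};

static void prio_enqueue(struct prio_sched *q, struct pkt *p)
{
	/* Append to the tail of the packet's band. */
	p->next = NULL;
	if (q->tail[p->band])
		q->tail[p->band]->next = p;
	else
		q->head[p->band] = p;
	q->tail[p->band] = p;
}

/*
 * Single dequeue point: scan bands in priority order. As long as exactly
 * one context calls this (e.g. under one tx lock), a low-priority packet
 * can never reach the hardware while a high-priority packet is queued.
 * If each band instead had its own dequeue loop on its own CPU, nothing
 * would stop the low band from transmitting ahead of a queued high-band
 * packet.
 */
static struct pkt *prio_dequeue(struct prio_sched *q)
{
	int band;

	for (band = 0; band < NUM_BANDS; band++) {
		struct pkt *p = q->head[band];

		if (p) {
			q->head[band] = p->next;
			if (!q->head[band])
				q->tail[band] = NULL;
			return p;
		}
	}
	return NULL;
}

int main(void)
{
	struct prio_sched q = { { NULL }, { NULL } };
	struct pkt lo = { .band = 1 }, hi = { .band = 0 };

	prio_enqueue(&q, &lo);
	prio_enqueue(&q, &hi);

	/* The high-priority packet comes out first even though it arrived later. */
	printf("first dequeued band: %d\n", prio_dequeue(&q)->band);
	return 0;
}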