Date: Mon, 11 Jun 2007 16:03:15 +0200
From: Patrick McHardy <kaber@...sh.net>
To: hadi@...erus.ca
CC: "Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>,
 davem@...emloft.net, netdev@...r.kernel.org, jeff@...zik.org,
 "Kok, Auke-jan H" <auke-jan.h.kok@...el.com>
Subject: Re: [PATCH] NET: Multiqueue network device support.

jamal wrote:
> On Mon, 2007-11-06 at 15:03 +0200, Patrick McHardy wrote:
>
>>> Take a step back:
>>> When you put a packet on the DMA ring, are you ever going to take it
>>> away at some point before it goes to the wire?
>>
>> No, but it's nevertheless not on the wire yet and the HW scheduler
>> controls when it will get there.
>>
>> It might in theory even never get there if higher priority queues are
>> continuously active.
>
> Sure - but what is wrong with that?

Nothing, this was just to illustrate why I disagree with the assumption
that the packet has hit the wire. On second thought I do agree with your
assumption for the single HW queue case: at the point we hand the packet
to the HW, the packet order is determined and unchangeable. But this is
not the case if the hardware includes its own scheduler. The qdisc is
simply not fully in charge anymore.

> What would be wrong is in the case of contention for a resource like a
> wire between a less important packet and a more important packet, the
> more important packet gets favored.

Read again what I wrote about the n > 2 case. With a single queue state,
low priority queues can starve high priority queues for up to the time it
takes to service n - 2 queues holding max_qlen - 1 packets each, plus the
time for a single packet. That assumes the worst case of n - 2 queues with
max_qlen - 1 packets queued and the lowest priority queue full, so the
device queue is stopped until we can send at least one lowest priority
packet, which requires fully servicing all higher priority queues first.

> Nothing like that ever happens in what i described.
> Remember there is no issue if there is no congestion or contention for
> local resources.

Your basic assumption seems to be that the qdisc is still in charge of
when packets get sent. This isn't the case if there is another scheduler
after the qdisc and there is contention in the second queue.

-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
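A minimal sketch of the single-queue-state starvation scenario described
above, assuming a hypothetical driver with several hardware TX rings but
only one qdisc-visible queue state. The names my_dev, my_ring, ring_full(),
my_start_xmit() and my_tx_complete(), and the toy priority-to-ring mapping,
are invented for illustration; only the netif_* helpers, netdev_priv() and
the 2007-era hard_start_xmit-style entry point are real kernel API.

/*
 * Hypothetical driver: N hardware TX rings scheduled by the NIC,
 * but only a single queue state visible to the qdisc layer.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define MY_NUM_RINGS 4

struct my_ring {
	unsigned int used;
	unsigned int size;
};

struct my_dev {
	struct my_ring ring[MY_NUM_RINGS];	/* ring[0] = highest priority */
};

static int ring_full(const struct my_ring *r)
{
	return r->used >= r->size;
}

/* 2007-era hard_start_xmit entry point: int (*)(struct sk_buff *, struct net_device *) */
static int my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct my_dev *priv = netdev_priv(dev);
	unsigned int band = skb->priority % MY_NUM_RINGS;	/* toy mapping */
	struct my_ring *r = &priv->ring[band];

	if (ring_full(r)) {
		/*
		 * Single queue state: stopping here freezes the whole
		 * device, so the qdisc can no longer hand us packets for
		 * the other, possibly empty, higher priority rings either.
		 */
		netif_stop_queue(dev);
		return NETDEV_TX_BUSY;
	}

	/* ... post skb to the hardware ring ... */
	r->used++;

	return NETDEV_TX_OK;
}

/* TX completion for one ring. */
static void my_tx_complete(struct net_device *dev, unsigned int band)
{
	struct my_dev *priv = netdev_priv(dev);
	int i;

	priv->ring[band].used--;

	/*
	 * With one queue state the stack cannot be told "band 0 has room,
	 * band 3 does not", so only wake once every ring has room again.
	 */
	for (i = 0; i < MY_NUM_RINGS; i++)
		if (ring_full(&priv->ring[i]))
			return;

	if (netif_queue_stopped(dev))
		netif_wake_queue(dev);
}

Because the stack sees only one queue state, the wake in my_tx_complete()
cannot happen before the full lowest-priority ring drains, and with a
strict-priority hardware scheduler that only happens after every higher
priority ring has been serviced, which is exactly the worst case computed
in the message above.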