Date:	Mon, 11 Jun 2007 08:23:31 -0400
From:	jamal <hadi@...erus.ca>
To:	Patrick McHardy <kaber@...sh.net>
Cc:	"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>,
	davem@...emloft.net, netdev@...r.kernel.org, jeff@...zik.org,
	"Kok, Auke-jan H" <auke-jan.h.kok@...el.com>
Subject: Re: [PATCH] NET: Multiqueue network device support.

On Mon, 2007-06-11 at 13:58 +0200, Patrick McHardy wrote:

> That's not true. Assume PSL has lots of packets, PSH is empty. We
> fill the PSL queue until there is no room left, so the driver
> has to stop the queue. 

Sure. Packets stashed on any DMA ring are considered "gone to the
wire". That is a very valid assumption to make.
 
> Now some PSH packets arrive, but the queue
> is stopped, no packets will be sent. 
> Now, you can argue that as
> soon as the first PSL packet is sent there is room for more and
> the queue will be activated again and we'll take PSH packets,

_exactly_ ;->

> so it doesn't matter because we can't send two packets at once
> anyway. Fine.

I can see your thought process building -
you are actually following what I am saying ;->

>  Take three HW queues, prio 0-2. The prio 2 queue
> is entirely full, prio 1 has some packets queued and prio 0 is
> empty. Now, because prio 2 is completely full, the driver has to
> stop the queue. Before it can start it again it has to send all
> prio 1 packets and then at least one packet of prio 2. Until
> this happens, no packets can be queued to prio 0.

The assumption is that packets handed to the DMA ring are gone to the
wire, that's it.
If you have a strict prio scheduler, contention from the stack only
matters when packets of different priorities arrive at the same time.
If that happens (assuming 0 is more important than 1, which is more
important than 2), then 0 always wins over 1, which wins over 2.
The same holds if you queue into hardware and the prioritization is the
same.

cheers,
jamal

