Message-Id: <1181572815.4077.13.camel@localhost>
Date:	Mon, 11 Jun 2007 10:40:15 -0400
From:	jamal <hadi@...erus.ca>
To:	Patrick McHardy <kaber@...sh.net>
Cc:	"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>,
	davem@...emloft.net, netdev@...r.kernel.org, jeff@...zik.org,
	"Kok, Auke-jan H" <auke-jan.h.kok@...el.com>
Subject: Re: [PATCH] NET: Multiqueue network device support.

On Mon, 2007-06-11 at 16:03 +0200, Patrick McHardy wrote:
> jamal wrote:

> > Sure - but what is wrong with that?
> 
> Nothing, this was just to illustrate why I disagree with the assumption
> that the packet has hit the wire. 

fair enough.

> On second thought I do agree with your
> assumption for the single HW queue case: at the point we hand the packet
> to the HW, the packet order is determined and unchangeable. But this
> is not the case if the hardware includes its own scheduler. The qdisc
> is simply not fully in charge anymore.

I am making the case that it does not affect the overall results
as long as you use the same parameterization on the qdisc and the
hardware. If the qdisc's high prio packets made it to the driver
before they make it out onto the wire, it is probably a good thing
that the hardware scheduler starves the low prio packets.
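
To make that concrete, here is a toy model (not kernel code; the
strict-priority behaviour and the packet list are made up) showing
that applying the same strict-priority ordering a second time, at a
HW stage parameterized like the qdisc, does not change the wire order
the qdisc already produced:

    # Toy: two identically-parameterized strict-priority stages
    # (qdisc, then HW scheduler) give the same wire order as one.
    # Lower band number = higher priority.
    import heapq

    def strict_prio(pkts):
        # pkts: list of (band, seq); pops lowest band first,
        # ties broken by arrival sequence
        h = list(pkts)
        heapq.heapify(h)
        return [heapq.heappop(h) for _ in range(len(h))]

    arrivals = [(2, 0), (0, 1), (1, 2), (0, 3), (2, 4)]
    once = strict_prio(arrivals)                 # qdisc only
    twice = strict_prio(strict_prio(arrivals))   # qdisc, then HW
    assert once == twice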

> Read again what I wrote about the n > 2 case. Low priority queues might
> starve high priority queues when using a single queue state, for at most
> the time it takes to service n - 2 queues with max_qlen - 1
> packets queued, plus the time for a single packet. That's assuming the
> worst case of n - 2 queues holding max_qlen - 1 packets and the lowest
> priority queue full, so the queue is stopped until we can send at
> least one lowest priority packet, which requires fully servicing
> all higher priority queues first.

I didn't quite follow the above - I will try re-reading your
other email to see if I can make sense of it.
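
Spelling out your worst case with made-up numbers, in case I am
misreading it (n = 4 bands, max_qlen = 100 and a 1 Gbit/s link are
just assumptions for illustration):

    # Patrick's bound: the single queue state can stall the high
    # prio band for the time to drain n - 2 full bands plus one
    # more packet.
    n = 4                         # bands (assumed)
    max_qlen = 100                # per-band limit (assumed)
    pkt_bits = (1500 + 38) * 8    # full-size frame + preamble/IFG
    link_bps = 1e9                # 1 Gbit/s (assumed)

    stall_pkts = (n - 2) * (max_qlen - 1) + 1       # = 199 packets
    stall_sec = stall_pkts * pkt_bits / link_bps    # ~2.4 ms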

> Your basic assumption seems to be that the qdisc is still in charge
> of when packets get sent. This isn't the case if there is another
> scheduler after the qdisc and there is contention in the second
> queue.

My basic assumption is that if you use the same scheduler in both the
hardware and the qdisc, configure the same number of queues and
map the same priorities, then you don't need to make any changes
to the qdisc code. If I have a series of routers through which a packet
traverses to its destination with the same QoS parameters, I also achieve
the same results.
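
For example (assuming a NIC with four hardware queues; the
hardware-side knob is device specific, so only the qdisc half is
shown):

    # map skb->priority values 0-3 to bands 0-3, everything else
    # to the lowest band, mirroring what the NIC is assumed to do
    tc qdisc add dev eth0 root handle 1: prio bands 4 \
        priomap 0 1 2 3 3 3 3 3 3 3 3 3 3 3 3 3

If both levels run that same mapping, the hardware scheduler only
ever confirms the ordering the qdisc already chose.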

cheers,
jamal
