Date:	Sun, 22 Jul 2007 19:03:01 +0200
From:	Patrick McHardy <kaber@...sh.net>
To:	Krishna Kumar2 <krkumar2@...ibm.com>
CC:	jagana@...ibm.com, johnpol@....mipt.ru,
	herbert@...dor.apana.org.au, gaagaan@...il.com,
	Robert.Olsson@...a.slu.se, kumarkr@...ux.ibm.com,
	rdreier@...co.com, peter.p.waskiewicz.jr@...el.com,
	hadi@...erus.ca, mcarlson@...adcom.com, jeff@...zik.org,
	general@...ts.openfabrics.org, mchan@...adcom.com, tgraf@...g.ch,
	netdev@...r.kernel.org, davem@...emloft.net, sri@...ibm.com
Subject: Re: [ofa-general] Re: [PATCH 05/10] sch_generic.c changes.

Krishna Kumar2 wrote:
> Patrick McHardy <kaber@...sh.net> wrote on 07/20/2007 11:46:36 PM:
> 
>>The check for tx_queue_len is wrong though,
>>it's only a default which can be overridden and some qdiscs don't
>>care for it at all.
> 
> 
> I think it should not matter whether qdiscs use this or not, or even if it
> is modified (unless it is made zero, in which case this breaks). The
> intention behind this check is to make sure that no more than tx_queue_len
> skbs are in all queues put together (q->qdisc + dev->skb_blist); otherwise
> the blist can become too large and break the idea of tx_queue_len. Is that
> a good justification?
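
(For concreteness, roughly what such a bound could look like -- dev->skb_blist
is from the patch under discussion, the other names are assumed, and this is a
sketch rather than the actual patch code:)

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>
	#include <net/sch_generic.h>

	/* Sketch: refuse to batch further skbs once the qdisc backlog plus
	 * the batch list together reach tx_queue_len.
	 */
	static int batch_has_room(struct net_device *dev, struct Qdisc *q)
	{
		return q->q.qlen + skb_queue_len(dev->skb_blist)
			< dev->tx_queue_len;
	}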


It's a good justification, but on second thought the entire idea of
a single queue after the qdiscs that is refilled independently of
transmission times etc. makes me worry a bit. By changing the
timing you're effectively changing the qdisc's behaviour, at least
in some cases. SFQ is a good example, but I believe it affects most
work-conserving qdiscs. Think of this situation:

100 packets of flow 1 arrive
50 packets of flow 1 are sent
100 packets for flow 2 arrive
remaining packets are sent

On the wire you'll first see 50 packets of flow 1, then 100 packets
alternating between flow 1 and flow 2, then 50 packets of flow 2.
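
(To make the interleaving concrete, here is a much simplified round-robin
dequeue in the spirit of SFQ -- not the real sch_sfq.c, with classification
and hashing omitted:)

	#include <linux/skbuff.h>

	struct flow {
		struct sk_buff_head queue;	/* per-flow backlog */
	};

	/* Dequeue one packet per active flow per round; with two backlogged
	 * flows this is what puts their packets alternately on the wire.
	 */
	static struct sk_buff *rr_dequeue(struct flow *flows, int nflows,
					  int *cur)
	{
		int i;

		for (i = 0; i < nflows; i++) {
			struct flow *f = &flows[(*cur + i) % nflows];

			if (!skb_queue_empty(&f->queue)) {
				*cur = (*cur + i + 1) % nflows;
				return __skb_dequeue(&f->queue);
			}
		}
		return NULL;
	}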

With your additional queue, all packets of flow 1 are pulled out of
the qdisc immediately and put in the fifo. When the 100 packets of
the second flow arrive, they also get pulled out immediately and are
put in the fifo behind the remaining 50 packets of flow 1.
So what you get on the wire is:

100 packets of flow 1
100 packets of flow 2

So SFQ is without any effect. This is not completely avoidable of
course, but you can and should limit the damage by only pulling
out as many packets as the driver can take and having the driver
stop the queue afterwards.
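
(A rough sketch of what I mean -- helper and field names are assumed here,
not taken from your patch:)

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>
	#include <net/sch_generic.h>

	/* Sketch: pull at most as many packets from the qdisc as the driver
	 * has free TX descriptors, and stop the queue once that room is used
	 * up, instead of draining the whole qdisc into the batch list.
	 */
	static void fill_batch(struct net_device *dev, struct Qdisc *q,
			       struct sk_buff_head *blist, int tx_room)
	{
		struct sk_buff *skb;

		while (tx_room > 0) {
			skb = q->dequeue(q);
			if (!skb)
				break;
			__skb_queue_tail(blist, skb);
			tx_room--;
		}

		if (tx_room == 0)
			netif_stop_queue(dev);
	}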

