Message-ID: <16360.1280338676@death>
Date:	Wed, 28 Jul 2010 10:37:56 -0700
From:	Jay Vosburgh <fubar@...ibm.com>
To:	Simon Horman <horms@...ge.net.au>
cc:	netdev@...r.kernel.org
Subject: Re: noqueue on bonding devices

Simon Horman <horms@...ge.net.au> wrote:

>Hi Jay, Hi All,
>
>I would just like to wonder out loud if it is intentional that bonding
>devices default to noqueue, whereas for instance ethernet devices
>default to a pfifo_fast with qlen 1000.

	Yes, it is.

>The reason I ask is that, when setting up some bandwidth
>control using tc, I encountered some strange behaviour which
>I eventually tracked down to the queue length of the qdiscs being 1p -
>inherited from noqueue, as opposed to the 1000p which would occur
>on an ethernet device.
>
>It's trivial to work around, either by altering the txqueuelen on
>the bonding device before adding the qdisc or by manually setting
>the qlen of the qdisc. But it did take us a while to determine the
>cause of the problem we were seeing, and as it seems inconsistent
>I'm interested to know why this is the case.

	Software-only virtual devices (loopback, bonding, bridge, vlan,
etc.) typically have no transmit queue because, well, the device does no
queueing.  That means there is no flow control infrastructure in the
software device; bonding et al. won't ever assert flow control (call
netif_stop_queue to temporarily suspend transmit) or accumulate packets
on a transmit queue.
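
	Roughly speaking, a software-only device's setup routine just
sets the transmit queue length to zero, and when the device is brought
up the kernel then attaches the noqueue qdisc instead of pfifo_fast.
Something along these lines (a sketch, not the actual bonding code;
"softdev_setup" is a made-up name):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

/* Sketch of a software-only device's setup routine.  With
 * tx_queue_len set to 0, bringing the device up gets the noqueue
 * qdisc rather than a pfifo_fast with a 1000 packet limit. */
static void softdev_setup(struct net_device *dev)
{
	ether_setup(dev);	/* ordinary ethernet defaults */
	dev->tx_queue_len = 0;	/* no software transmit queue */
}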

	Hardware ethernet devices set a queue length because it is
meaningful for them to do so.  When their hardware transmit ring fills
up, they will assert flow control, and stop accepting new packets for
transmit.  Packets then accumulate in the software transmit queue, and
when the device unblocks, those packets are ready to go.  When under
continuous load, hardware network devices typically free up ring entries
in blocks (not one at a time), so the software transmit queue helps to
smooth out the chunkiness of the hardware driver's processing, minimize
dropped packets, etc.
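
	In driver terms the pattern looks roughly like this (a simplified
sketch of a hypothetical hardware driver, not any particular one; the
hw_* helpers and hw_priv are placeholders, only the netif_* calls are
the real kernel interfaces):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct hw_priv;					/* placeholder driver state */
static void hw_post_to_tx_ring(struct hw_priv *priv, struct sk_buff *skb);
static bool hw_tx_ring_full(const struct hw_priv *priv);
static void hw_reclaim_tx_ring(struct hw_priv *priv);

/* Stop the queue when the transmit ring fills, so further packets
 * back up in the software transmit queue instead of being dropped. */
static netdev_tx_t hw_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct hw_priv *priv = netdev_priv(dev);

	hw_post_to_tx_ring(priv, skb);
	if (hw_tx_ring_full(priv))
		netif_stop_queue(dev);		/* assert flow control */
	return NETDEV_TX_OK;
}

/* Called from the TX completion (interrupt/NAPI) path once the
 * hardware has freed up a block of ring entries. */
static void hw_tx_complete(struct net_device *dev)
{
	struct hw_priv *priv = netdev_priv(dev);

	hw_reclaim_tx_ring(priv);
	if (netif_queue_stopped(dev) && !hw_tx_ring_full(priv))
		netif_wake_queue(dev);		/* queued packets drain again */
}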

	It's certainly possible to add a queue and qdisc to a bonding
device, and it is reasonable to do so if you want to do packet scheduling
with tc and friends.  In that case, the queue is really just there for the
tc machinery to connect to; the queue won't accumulate packets on account
of the driver (though it could if the scheduler, e.g., rate limits).

>On an unrelated note, MAINTAINERS lists bonding-devel@...ts.sourceforge.net
>but the (recent) archives seem to be entirely spam.  Is the MAINTAINERS
>file correct?

	Yah, I should probably change that; the spam is pretty heavy,
and there isn't much I can do to limit it.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@...ibm.com
