Message-ID: <063D6719AE5E284EB5DD2968C1650D6D0F6D7699@AcuExch.aculab.com>
Date:	Fri, 7 Mar 2014 14:35:35 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Eric Dumazet' <eric.dumazet@...il.com>
CC:	'Neal Cardwell' <ncardwell@...gle.com>,
	Rick Jones <rick.jones2@...com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: Can I limit the number of active tx per TCP socket?

From: Eric Dumazet 
> On Fri, 2014-03-07 at 12:29 +0000, David Laight wrote:
> 
> > I'll probably look at delaying the sends within our own code.
> 
> That would be bad.

The sending code can be told whether each packet is control (which
should be sent immediately) or data (which can be delayed based on how
much data has been sent recently).
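
Roughly this sort of thing, as a sketch (the per-second byte budget and
the helper are illustrative only, not our actual code):

/* Sketch only: data messages are held back once a crude per-second
 * byte budget for the 64k link is used up; control messages always
 * go straight out.  SLOW_LINK_BYTES_PER_SEC is an assumed figure. */
#include <stddef.h>
#include <sys/socket.h>
#include <time.h>

#define SLOW_LINK_BYTES_PER_SEC 8000	/* ~64kbit/s */

static size_t bytes_this_second;
static time_t budget_second;

static int data_budget_ok(size_t len)
{
	time_t now = time(NULL);

	if (now != budget_second) {	/* new second, reset the budget */
		budget_second = now;
		bytes_this_second = 0;
	}
	return bytes_this_second + len <= SLOW_LINK_BYTES_PER_SEC;
}

/* Returns 0 if sent, -1 if the caller should queue the message. */
static int app_send(int fd, const void *buf, size_t len, int is_control)
{
	if (!is_control && !data_budget_ok(len))
		return -1;		/* delay the data message */

	bytes_this_second += len;
	return send(fd, buf, len, 0) < 0 ? -1 : 0;
}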

I also probably ought to make this work on the Windows version
of our code - but most of the high-throughput systems are Linux.
The overheads through the M$ IP stack are horrid.

> Just use fq/pacing, this is way better. It's designed for this usage.
> The trick is to reduce 'quantum' as your MTU is 273+14 bytes.
> 
> QUANTUM=$((273+14))
> tc qdisc replace dev eth0 root fq quantum $QUANTUM initial_quantum $QUANTUM
> 
> This will _perfectly_ pace packets every 34 ms.
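
(That 34 ms is just the payload over the link rate: 273 bytes * 8 /
64000 bit/s is about 34 ms; including the 14 bytes of overhead it is
nearer 36 ms.)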

Unfortunately that isn't what I need to do.
The 64k links run a reliable protocol and we use application-level
flow control to limit the number of packets being sent.

So not all of the data on the TCP connection is destined to be sent
over the slow link(s).

Everything works fine - except that I'd like the traffic to fill
Ethernet packets under heavy load.
If I could set the Nagle timeout to 1-2 ms (on a per-socket basis)
I could enable Nagle and that would probably suffice.
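
Failing that, something like keeping the socket corked and flushing it
from our own ~2 ms tick might approximate it (sketch only - we don't do
this today, and TCP_CORK's own fallback flush is about 200 ms):

/* Sketch: approximate a short Nagle-style timeout by keeping the socket
 * corked and flushing it from a periodic ~2 ms application timer.
 * TCP_CORK holds back partial frames; the kernel pushes them out itself
 * after ~200 ms in any case. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void set_cork(int fd, int on)
{
	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
}

/* Called from the ~2 ms timer: push out whatever has accumulated,
 * then start collecting again. */
static void flush_socket(int fd)
{
	set_cork(fd, 0);
	set_cork(fd, 1);
}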

> If you share your Ethernet device between this 64k destination and other
> uses, then you need a more complex setup with HTB plus two classes, and
> fq running at the HTB leaf.

Only the one connection between the two IP addresses is carrying this data.
Other connections carry other traffic that has entirely different
characteristics.
