Message-Id: <20120225015122.a7419f74.billfink@mindspring.com>
Date:	Sat, 25 Feb 2012 01:51:22 -0500
From:	Bill Fink <billfink@...dspring.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Yevgeny Petrilin <yevgenyp@...lanox.com>,
	David Miller <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 1/3] mlx4_en: TX ring size default to 1024

On Fri, 24 Feb 2012, Eric Dumazet wrote:

> On Friday, 24 February 2012 at 19:35 +0000, Yevgeny Petrilin wrote:
> > > > Signed-off-by: Yevgeny Petrilin <yevgenyp@...lanox.co.il>
> > > 
> > > This is ridiculous as a default, yes even for 10Gb.
> > > 
> > > Do you have any idea how high the latency is going to be for
> > > packets trying to get into the transmit queue if there are
> > > already a thousand other frames in there?

For a GigE NIC with a typical ring size of 256, the serialization delay
for 256 1500-byte packets is:

	1500*8*256/10^9 = ~3.1 msec

For a 10-GigE NIC with a ring size of 1024, the serialization delay
for 1024 1500-byte packets is:

	1500*8*1024/10^10 = ~1.2 msec

So it's not immediately clear that a ring size of 1024 is unreasonable
for 10-GigE.
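
For anyone who wants to redo the arithmetic for other link speeds or
ring sizes, here's a trivial stand-alone sketch of the same formula
(the two entries are just the cases above; nothing in it is NIC- or
driver-specific):

	/* serialization delay = frame_bits * ring_entries / link_bps */
	#include <stdio.h>

	int main(void)
	{
		const struct { const char *name; double bps; int ring; } cfg[] = {
			{ "GigE,    ring  256",  1e9,  256 },
			{ "10-GigE, ring 1024", 1e10, 1024 },
		};
		const double frame_bytes = 1500;

		for (unsigned i = 0; i < sizeof(cfg) / sizeof(cfg[0]); i++)
			printf("%s: ~%.1f msec\n", cfg[i].name,
			       frame_bytes * 8 * cfg[i].ring / cfg[i].bps * 1e3);
		return 0;
	}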

It probably boils down to whether the default setting should be biased
more toward low-latency applications or toward high-throughput bulk
data applications.  The best happy medium is something to be settled
by appropriate benchmark testing.  Of course, anyone can change the
setting to suit their purpose, so it's really just a question of
what's best for the "usual" case.
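
(For what it's worth, the ring size is a runtime tunable anyway, at
least for drivers that implement the ethtool ring-param hooks; "eth2"
below is just a placeholder interface name:

	ethtool -g eth2          # show current RX/TX ring sizes
	ethtool -G eth2 tx 512   # shrink the TX ring if latency matters more

so nobody is stuck with whatever default the driver author picked.)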

> > On the other hand, when a smaller queue with 1000 in-flight packets
> > would mean the queue gets stopped, how is that better?
> 
> It's better because you can have any kind of Qdisc setup to properly
> classify packets, with 100,000 total packets in queues if you wish.

Not everyone wants to deal with the convoluted, arcane, and poorly
documented qdisc machinery, especially with its current limitations
at 10-GigE (or faster) line rates.
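
For reference, the sort of setup Eric means would be roughly along
these lines (the interface name, queue limits, and the port-22 filter
below are purely illustrative):

	# split traffic into priority bands, each with its own deep fifo
	tc qdisc add dev eth2 root handle 1: prio bands 3
	tc qdisc add dev eth2 parent 1:1 handle 10: pfifo limit 1000
	tc qdisc add dev eth2 parent 1:2 handle 20: pfifo limit 100000
	tc qdisc add dev eth2 parent 1:3 handle 30: pfifo limit 100000
	# steer interactive ssh traffic into the highest priority band
	tc filter add dev eth2 parent 1: protocol ip prio 1 u32 \
		match ip dport 22 0xffff flowid 1:1

and tuning that properly at these line rates is exactly the sort of
effort I'd rather not require of everyone by default.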

> The TX ring is a single FIFO, and that is just horrible, especially with big packets...
> 
> > Having a bigger TX ring helps deal with bursts of TX packets without the overhead of stopping and restarting the queue.
> > It also makes sense to have the same size TX and RX queues, for example in case of traffic being forwarded from TX to RX.
> 
> Really I doubt people using forwarding setups use default qdiscs.

I don't think that's necessarily so uncommon; consider a simple
10-GigE firewall setup, for example.

> Instead of bigger TX rings, they need appropriate Qdiscs.
> 
> > I did find a number of 10Gb vendors that use 1024 or more as the default TX queue size.
> 
> That's a shame.

						-Bill
