Date:	Tue, 2 Oct 2007 00:25:02 -0400
From:	Bill Fink <billfink@...dspring.com>
To:	hadi@...erus.ca
Cc:	David Miller <davem@...emloft.net>, krkumar2@...ibm.com,
	johnpol@....mipt.ru, herbert@...dor.apana.org.au, kaber@...sh.net,
	shemminger@...ux-foundation.org, jagana@...ibm.com,
	Robert.Olsson@...a.slu.se, rick.jones2@...com, xma@...ibm.com,
	gaagaan@...il.com, netdev@...r.kernel.org, rdreier@...co.com,
	peter.p.waskiewicz.jr@...el.com, mcarlson@...adcom.com,
	jeff@...zik.org, mchan@...adcom.com, general@...ts.openfabrics.org,
	kumarkr@...ux.ibm.com, tgraf@...g.ch, randy.dunlap@...cle.com,
	sri@...ibm.com
Subject: Re: [PATCH 2/3][NET_BATCH] net core use batching

On Mon, 01 Oct 2007, jamal wrote:

> On Mon, 2007-10-01 at 00:11 -0400, Bill Fink wrote:
> 
> > Have you done performance comparisons for the case of using 9000-byte
> > jumbo frames?
> 
> I haven't, but will try if any of the GigE cards I have support it.
> 
> As a side note: I have not seen any useful gains or losses as the
> packet size approaches even the 1500B MTU. For example, beyond about
> 256B, neither batching nor non-batching makes much difference in
> either throughput or CPU use. Below 256B, there's a noticeable gain
> for batching. Note that in my tests all 4 CPUs are running UDP at
> full throttle, so the occupancy of both the qdisc queue(s) and the
> ethernet ring is constantly high. For example, at 512B the app is
> 80% idle on all 4 CPUs and we are hitting roughly wire speed; we are
> 90% idle at 1024B. This is the case with or without batching. So my
> suspicion is that, given that trend, a 9000B packet will just follow
> the same pattern.
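
(For reference, a packet-size sweep of this sort can be scripted
against pktgen roughly as follows.  This is only a sketch: the
device, destination IP, and MAC are placeholders, and it assumes
root privileges with the pktgen module loaded.)

#!/usr/bin/env python3
# Sketch of a pktgen packet-size sweep (UDP, one thread shown).
# DEV, DST, and DST_MAC are placeholders; requires root and
# "modprobe pktgen".

SIZES = [64, 256, 512, 1024, 1500, 9000]      # bytes, incl. jumbo
DEV = "eth0"                                  # placeholder NIC
DST = "10.0.0.2"                              # placeholder sink
DST_MAC = "00:11:22:33:44:55"                 # placeholder MAC

def pgset(path, cmd):
    # pktgen is driven by writing one command per write to its
    # /proc/net/pktgen files.
    with open(path, "w") as f:
        f.write(cmd + "\n")

def run_size(size):
    thread = "/proc/net/pktgen/kpktgend_0"
    pgset(thread, "rem_device_all")
    pgset(thread, "add_device " + DEV)
    dev = "/proc/net/pktgen/" + DEV
    pgset(dev, "count 1000000")
    pgset(dev, "delay 0")
    pgset(dev, "clone_skb 100")
    pgset(dev, "pkt_size %d" % size)
    pgset(dev, "dst " + DST)
    pgset(dev, "dst_mac " + DST_MAC)
    pgset("/proc/net/pktgen/pgctrl", "start")  # blocks until done
    with open(dev) as f:
        print("--- %dB ---" % size)
        print(f.read())                        # includes pps result

if __name__ == "__main__":
    for s in SIZES:
        run_size(s)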

One reason I ask is that on an earlier set of alternative batching
xmit patches by Krishna Kumar, his performance testing showed a 30%
performance hit for TCP with a single process and a 4 KB size, and a
5% hit with a single process and a 16 KB size (an 8 KB size wasn't
tested).  Unfortunately I was too busy at the time to inquire further
about it, but it would be a major potential concern for me in my
10-GigE network testing with 9000-byte jumbo frames.  Of course, the
single-process, 4 KB-or-larger case was the only one that showed a
significant performance hit in Krishna Kumar's latest reported test
results, so it might be acceptable to just have a switch to disable
the batching feature for that specific usage scenario.  So it would
be useful to know whether your xmit batching changes have similar
issues.
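
(Purely as an illustration of such a switch: if it ended up as a
sysctl, disabling it per workload could be as simple as the sketch
below.  The knob name "net.core.xmit_batch" is invented here, not
something the patches necessarily provide.)

#!/usr/bin/env python3
# Illustration only: toggling a *hypothetical* sysctl that disables
# xmit batching for workloads where it hurts (e.g. single-process
# TCP with 4 KB sends).  "net.core.xmit_batch" is an invented name.

from pathlib import Path

KNOB = Path("/proc/sys/net/core/xmit_batch")   # hypothetical knob

def set_batching(enabled):
    # Files under /proc/sys take "0" or "1" plus a newline.
    KNOB.write_text("1\n" if enabled else "0\n")

if __name__ == "__main__":
    set_batching(False)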

Also, for your xmit batching changes, I think it would be good to see
performance comparisons for TCP and IP forwarding in addition to your
UDP pktgen tests, covering various packet sizes up to and including
9000-byte jumbo frames.
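
(As a sketch of what the TCP side of such a comparison might look
like, using netperf's TCP_STREAM test: the host and send sizes are
placeholders, and a netserver instance is assumed on the far end.
The same sweep would be run once with batching enabled and once with
it disabled.)

#!/usr/bin/env python3
# Sketch: netperf TCP_STREAM throughput at several send sizes up to
# 9000 bytes.  HOST is a placeholder; netserver must be running
# there.

import subprocess

HOST = "10.0.0.2"                              # placeholder receiver
SIZES = [256, 512, 1024, 4096, 8192, 9000]     # send sizes in bytes

def tcp_stream(size):
    # "--" separates netperf's global options from test-specific
    # ones; -m sets the send message size for TCP_STREAM.
    result = subprocess.run(
        ["netperf", "-H", HOST, "-t", "TCP_STREAM",
         "--", "-m", str(size)],
        capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    for size in SIZES:
        print("=== TCP_STREAM, send size %dB ===" % size)
        print(tcp_stream(size))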

						-Bill
