Message-Id: <1191245440.4378.12.camel@localhost>
Date: Mon, 01 Oct 2007 09:30:40 -0400
From: jamal <hadi@...erus.ca>
To: Bill Fink <billfink@...dspring.com>
Cc: David Miller <davem@...emloft.net>, krkumar2@...ibm.com,
johnpol@....mipt.ru, herbert@...dor.apana.org.au, kaber@...sh.net,
shemminger@...ux-foundation.org, jagana@...ibm.com,
Robert.Olsson@...a.slu.se, rick.jones2@...com, xma@...ibm.com,
gaagaan@...il.com, netdev@...r.kernel.org, rdreier@...co.com,
peter.p.waskiewicz.jr@...el.com, mcarlson@...adcom.com,
jeff@...zik.org, mchan@...adcom.com, general@...ts.openfabrics.org,
kumarkr@...ux.ibm.com, tgraf@...g.ch, randy.dunlap@...cle.com,
sri@...ibm.com
Subject: Re: [PATCH 2/3][NET_BATCH] net core use batching
On Mon, 2007-01-10 at 00:11 -0400, Bill Fink wrote:
> Have you done performance comparisons for the case of using 9000-byte
> jumbo frames?
I haven't, but will try it if any of the GigE cards I have support it.
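For reference, a quick way to probe for jumbo support is to try raising the
MTU and see whether the driver accepts it. A minimal sketch (the interface
name "eth0" is a placeholder for whatever NIC is under test, and changing
the MTU needs CAP_NET_ADMIN):

/* Probe jumbo-frame support by attempting SIOCSIFMTU with MTU 9000. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder NIC */
	ifr.ifr_mtu = 9000;

	if (ioctl(fd, SIOCSIFMTU, &ifr) < 0)
		perror("SIOCSIFMTU (driver likely lacks jumbo support)");
	else
		printf("%s now has MTU %d\n", ifr.ifr_name, ifr.ifr_mtu);

	close(fd);
	return 0;
}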
As a side note: I have not seen any useful gains or losses as the packet
size approaches even the 1500B MTU. For example, past about 256B, neither
batching nor non-batching shows much difference in either throughput or
CPU use. Below 256B, there's a noticeable gain from batching.
Note that in my tests all 4 CPUs are sending UDP full-throttle, so the
occupancy of both the qdisc queue(s) and the ethernet ring is constantly
high. For example, at 512B the app is 80% idle on all 4 CPUs and we are
hitting in the range of wire speed; at 1024B we are at 90% idle. This is
the case with or without batching. So my suspicion is that, with that
trend, a 9000B packet will just follow the same pattern.
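For illustration, a minimal sketch of that kind of full-throttle UDP
sender (one instance pinned per CPU); the destination address, port, and
default payload size here are placeholders, not the actual test setup:

/* Blast fixed-size UDP packets as fast as possible; stop with ^C.
 * Usage: ./udpblast [payload-bytes]                               */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
	int size = argc > 1 ? atoi(argv[1]) : 512;   /* payload bytes */
	char *buf = calloc(1, size);
	struct sockaddr_in dst;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0 || !buf) {
		perror("setup");
		return 1;
	}

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(5001);                  /* placeholder port */
	inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr); /* placeholder */

	for (;;) {                /* keeps qdisc queue and ring occupied */
		if (sendto(fd, buf, size, 0,
			   (struct sockaddr *)&dst, sizeof(dst)) < 0) {
			perror("sendto");
			return 1;
		}
	}
}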
cheers,
jamal