Message-Id: <1191793743.4352.13.camel@localhost>
Date: Sun, 07 Oct 2007 17:49:03 -0400
From: jamal <hadi@...erus.ca>
To: David Miller <davem@...emloft.net>
Cc: krkumar2@...ibm.com, johnpol@....mipt.ru,
herbert@...dor.apana.org.au, kaber@...sh.net,
shemminger@...ux-foundation.org, jagana@...ibm.com,
Robert.Olsson@...a.slu.se, rick.jones2@...com, xma@...ibm.com,
gaagaan@...il.com, netdev@...r.kernel.org, rdreier@...co.com,
peter.p.waskiewicz.jr@...el.com, mcarlson@...adcom.com,
jeff@...zik.org, mchan@...adcom.com, general@...ts.openfabrics.org,
kumarkr@...ux.ibm.com, tgraf@...g.ch, randy.dunlap@...cle.com,
sri@...ibm.com
Subject: NET_BATCH: some results
It seems prettier to just draw graphs, and since this one is a small file,
it is attached. The graph compares a patched net-2.6.24 kernel against a
plain net-2.6.24 kernel, using a UDP app that sends on 4 CPUs as fast as
the lower layers will allow.
Refer to my earlier description of the test setup etc.
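For anyone who missed those posts, a minimal sketch of that kind of sender
is below; note this is not the actual test program, and the destination
address, port and payload size are made-up illustrative values. The idea is
one process per CPU, each pinned with sched_setaffinity() and pushing
fixed-size UDP packets through sendto() in a tight loop, so the lower
layers set the pace.

/* Illustrative sketch only: one UDP blaster per CPU.
 * DST_IP, DST_PORT and PKT_SIZE are hypothetical parameters. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCPUS    4
#define PKT_SIZE 256                 /* payload size under test */
#define DST_IP   "192.168.1.2"       /* hypothetical receiver */
#define DST_PORT 5001

static void sender(int cpu)
{
	cpu_set_t set;
	char buf[PKT_SIZE];
	struct sockaddr_in dst;
	int fd;

	/* pin this process to one CPU */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	sched_setaffinity(0, sizeof(set), &set);

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		exit(1);
	}

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(DST_PORT);
	inet_pton(AF_INET, DST_IP, &dst.sin_addr);

	memset(buf, 0xa5, sizeof(buf));

	/* blast packets; with a blocking socket, sendto() stalls when the
	 * socket wmem fills up, so the loop runs only as fast as the
	 * qdisc/driver below will drain it */
	for (;;)
		sendto(fd, buf, sizeof(buf), 0,
		       (struct sockaddr *)&dst, sizeof(dst));
}

int main(void)
{
	int i;

	for (i = 0; i < NCPUS; i++)
		if (fork() == 0)
			sender(i);

	/* children run until killed; parent just waits */
	for (i = 0; i < NCPUS; i++)
		wait(NULL);
	return 0;
}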
As I noted earlier, on this hardware we approach wire speed at around 200B,
so above that size the app is mostly idle as the link becomes the
bottleneck; for example, it is > 85% idle at 512B and > 90% idle at 1024B.
This holds with or without batching, so the differentiation is really in
the smaller packet sizes.
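As a rough sanity check of where the link saturates (assuming, purely for
illustration, a 1 Gb/s link and that the sizes above are UDP payload
sizes), the wire-speed packet rate works out to about

\[
\mathrm{pps} \approx
\frac{R_{\mathrm{link}}}
     {8 \times (\mathrm{payload} + 28_{\mathrm{UDP/IP}}
      + 18_{\mathrm{Eth+FCS}} + 20_{\mathrm{preamble+IFG}})}
\]

With R_link = 10^9 bit/s that gives roughly 470 kpps at 200B and about
115 kpps at 1024B; once the senders can exceed those rates, the app simply
waits on the link, which matches the idle figures above.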
Enjoy!
cheers,
jamal
[Attachment: "batch-pps.pdf" (application/pdf, 12238 bytes)]