Date: Wed, 08 Aug 2007 03:49:00 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: krkumar2@...ibm.com
Cc: johnpol@....mipt.ru, sri@...ibm.com,
shemminger@...ux-foundation.org, kaber@...sh.net,
jagana@...ibm.com, Robert.Olsson@...a.slu.se, rick.jones2@...com,
herbert@...dor.apana.org.au, gaagaan@...il.com,
kumarkr@...ux.ibm.com, rdreier@...co.com,
peter.p.waskiewicz.jr@...el.com, mcarlson@...adcom.com,
jeff@...zik.org, general@...ts.openfabrics.org, mchan@...adcom.com,
tgraf@...g.ch, hadi@...erus.ca, netdev@...r.kernel.org,
xma@...ibm.com
Subject: Re: [PATCH 0/9 Rev3] Implement batching skb API and support in
IPoIB
From: Krishna Kumar <krkumar2@...ibm.com>
Date: Wed, 08 Aug 2007 15:01:14 +0530
> RESULTS: The performance improvement for TCP No Delay is in the range of -8%
> to 320% (with -8% being the sole negative), with many individual tests
> giving 50% or more improvement (I think it has to do with the hw slots
> filling up quicker, resulting in more batching when the queue gets
> woken). The results for TCP are in the range of -11% to 93%, with most
> of the tests (8/12) showing improvements.
Not because I think it obviates your work, but rather because I'm
curious, could you test a TSO-in-hardware driver converted to
batching and see how TSO alone compares to batching for a pure
TCP workload?
I personally don't think it will help for that case at all, as
TSO likely does a better job of coalescing the work _and_ reducing
bus traffic, as well as work in the TCP stack.