Message-Id: <20070824.142503.30177455.davem@davemloft.net>
Date: Fri, 24 Aug 2007 14:25:03 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: hadi@...erus.ca
Cc: billfink@...dspring.com, rick.jones2@...com, krkumar2@...ibm.com,
gaagaan@...il.com, general@...ts.openfabrics.org,
herbert@...dor.apana.org.au, jagana@...ibm.com, jeff@...zik.org,
johnpol@....mipt.ru, kaber@...sh.net, mcarlson@...adcom.com,
mchan@...adcom.com, netdev@...r.kernel.org,
peter.p.waskiewicz.jr@...el.com, rdreier@...co.com,
Robert.Olsson@...a.slu.se, shemminger@...ux-foundation.org,
sri@...ibm.com, tgraf@...g.ch, xma@...ibm.com
Subject: Re: [PATCH 0/9 Rev3] Implement batching skb API and support in
IPoIB
From: jamal <hadi@...erus.ca>
Date: Fri, 24 Aug 2007 08:14:16 -0400
> Seems the receive side of the sender is also consuming a lot more cpu
> i suspect because receiver is generating a lot more ACKs with TSO.
I've seen this behavior before with a receiver that has a low-powered
CPU, and the issue is that batching too much actually hurts the receiver.
If the data packets were spaced out better, the receiver would handle
the load better.
This is the thing the TOE guys keep talking about overcoming with
their packet pacing algorithms in their on-card TOE stack.
My hunch is that even if, in the non-TSO case, the TX packets are all
back to back in the card's TX ring, TSO still spits them out faster on
the wire.