Message-ID: <OF37602E5B.BA36F154-ON6525731E.0028A4E9-6525731E.00293CC3@in.ibm.com>
Date: Fri, 20 Jul 2007 13:00:25 +0530
From: Krishna Kumar2 <krkumar2@...ibm.com>
To: Stephen Hemminger <shemminger@...ux-foundation.org>
Cc: davem@...emloft.net, gaagaan@...il.com,
general@...ts.openfabrics.org, hadi@...erus.ca,
herbert@...dor.apana.org.au, jagana@...ibm.com, jeff@...zik.org,
johnpol@....mipt.ru, kaber@...sh.net, kumarkr@...ux.ibm.com,
mcarlson@...adcom.com, mchan@...adcom.com, netdev@...r.kernel.org,
peter.p.waskiewicz.jr@...el.com, rdreier@...co.com,
rick.jones2@...com, Robert.Olsson@...a.slu.se, sri@...ibm.com,
tgraf@...g.ch, xma@...ibm.com
Subject: Re: [PATCH 00/10] Implement batching skb API
Stephen Hemminger <shemminger@...ux-foundation.org> wrote on 07/20/2007
12:48:48 PM:
> You may see worse performance with batching in the real world when
> running over WANs. Like TSO, batching will generate back-to-back packet
> trains that are subject to multi-packet synchronized loss. The problem
> is that intermediate router queues are often close to full, and when a
> long string of packets arrives back to back only the first ones will
> get in; the rest get dropped. Normal sends have at least minimal
> pacing, so they are less likely to get synchronized drops.
Hi Stephen,
OK. The difference I can see is that the existing code's "minimal
pacing" could also lead to (possibly slightly less) loss, since there the
sends are quick iterations at the IP layer, whereas with batching the
sends are iterated at the driver layer.
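
To make that concrete, here is a rough user-space sketch of the two
transmit patterns (the function names below are made up purely for
illustration; they are not the API used in these patches):

/* Illustrative sketch of the two transmit patterns discussed above.
 * drv_xmit_one()/drv_xmit_batch() are invented names for the example,
 * not functions from the batching patch series. */
#include <stdio.h>

#define NPKTS 4

/* Existing path: the stack hands the driver one packet per call, so each
 * packet crosses the qdisc/driver boundary separately ("minimal pacing"
 * between wire transmissions). */
static void drv_xmit_one(int pkt)
{
        printf("xmit pkt %d (separate driver call)\n", pkt);
}

/* Batched path: the driver receives the whole list and pushes the packets
 * out back to back from a single call, producing a packet train. */
static void drv_xmit_batch(const int *pkts, int n)
{
        for (int i = 0; i < n; i++)
                printf("xmit pkt %d (same driver call, back to back)\n",
                       pkts[i]);
}

int main(void)
{
        int pkts[NPKTS] = { 0, 1, 2, 3 };

        /* per-skb iteration at the IP/qdisc layer */
        for (int i = 0; i < NPKTS; i++)
                drv_xmit_one(pkts[i]);

        /* single iteration at the driver layer */
        drv_xmit_batch(pkts, NPKTS);
        return 0;
}

The point is only that the batched path emits the whole train from one
driver call, so whatever small gaps the per-skb path introduced between
packets disappear.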
Is it an issue? Any suggestions?
Thanks,
- KK