Message-ID: <AE90C24D6B3A694183C094C60CF0A2F6026B71B6@saturn3.aculab.com>
Date: Tue, 26 Mar 2013 11:27:48 -0000
From: "David Laight" <David.Laight@...LAB.COM>
To: "James Harper" <james.harper@...digoit.com.au>,
"Paul Durrant" <Paul.Durrant@...rix.com>,
"Wei Liu" <wei.liu2@...rix.com>,
"David Vrabel" <david.vrabel@...rix.com>
Cc: "Ian Campbell" <Ian.Campbell@...rix.com>,
"Wei Liu" <liuw@...w.name>, <netdev@...r.kernel.org>,
<konrad.wilk@...cle.com>, <xen-devel@...ts.xen.org>,
<annie.li@...cle.com>
Subject: RE: [Xen-devel] [PATCH 5/6] xen-netback: coalesce slots before copying
> As stated previously, I've observed Windows issuing staggering
> numbers of buffers to NDIS miniport drivers, so you will need
> to coalesce in a Windows driver anyway. I'm not sure what the
> break-even point is, but I think it's safe to say that in the
> choice between using 1000 (worst case) ring slots (with the
> resulting mapping overheads) and coalescing in the frontend,
> coalescing is going to be the better option.
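To illustrate the coalescing option, here is a minimal sketch; the
'struct frag', the 4096-byte slot size, and the function name are all
made up for illustration and are not the actual netfront code:

#include <stddef.h>
#include <string.h>

struct frag { const void *data; size_t len; };

/*
 * Copy many small fragments into a few page-sized bounce buffers so
 * the packet occupies a bounded number of ring slots instead of one
 * slot per fragment.  Returns the number of slots used, or -1 if the
 * packet would still need more than max_slots after coalescing.
 */
int coalesce_frags(const struct frag *frags, int nr_frags,
                   unsigned char slots[][4096], int max_slots)
{
    int slot = 0;
    size_t used = 0;

    for (int i = 0; i < nr_frags; i++) {
        const unsigned char *p = frags[i].data;
        size_t left = frags[i].len;

        while (left > 0) {
            size_t n = left < 4096 - used ? left : 4096 - used;

            if (slot >= max_slots)
                return -1;
            memcpy(&slots[slot][used], p, n);
            used += n;
            p += n;
            left -= n;
            if (used == 4096) {
                slot++;
                used = 0;
            }
        }
    }
    return slot + (used != 0);
}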
A long time ago we did some calculations on a SPARC MBus/SBus
system (whose IOMMU requires setup for DMA) and got a
break-even point of about 1kB.
(And I'm not sure we arranged to do aligned copies.)
Clearly that isn't directly relevant here...
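For reference, a figure like that falls out of simple arithmetic:
copying wins while the total per-byte copy cost stays below the fixed
per-mapping cost. A back-of-envelope version, where both costs are
assumed numbers for illustration rather than measurements from that
system:

#include <stdio.h>

int main(void)
{
    double map_setup_ns = 2000.0;   /* assumed per-mapping IOMMU setup cost */
    double copy_ns_per_byte = 2.0;  /* assumed memcpy cost per byte */

    /* Copying is cheaper while len * copy_ns_per_byte < map_setup_ns. */
    printf("copy fragments smaller than ~%.0f bytes\n",
           map_setup_ns / copy_ns_per_byte);
    return 0;
}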
It is even likely that the Ethernet chips will underrun
if requested to do too many ring operations, especially
at their maximum speed.
I guess none of the modern ones require the first fragment
to be at least 100 bytes in order to guarantee retransmission
after a collision.
David