Message-ID: <6035A0D088A63A46850C3988ED045A4B3880B4F4@BITCOM1.int.sbss.com.au>
Date: Tue, 26 Mar 2013 11:00:43 +0000
From: James Harper <james.harper@...digoit.com.au>
To: Paul Durrant <Paul.Durrant@...rix.com>,
Wei Liu <wei.liu2@...rix.com>,
David Vrabel <david.vrabel@...rix.com>
CC: Ian Campbell <Ian.Campbell@...rix.com>, Wei Liu <liuw@...w.name>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
"annie.li@...cle.com" <annie.li@...cle.com>
Subject: RE: [Xen-devel] [PATCH 5/6] xen-netback: coalesce slots before
copying
> > Because the check is >= MAX_SKB_FRAGS originally and James Harper told
> > me that "Windows stops counting on 20".
> >
>
> For the Citrix PV drivers I lifted the #define of MAX_SKB_FRAGS from the
> dom0 kernel (i.e. 18). If a packet coming from the stack has more than that
> number of fragments then it's copied and coalesced. The value advertised
> for TSO size is chosen such that a maximally sized TSO will always fit in 18
> fragments after coalescing but (since this is Windows) the drivers don't trust
> the stack to stick to that limit and will drop a packet if it won't fit.
>
> It seems reasonable, since the backend is copying anyway, that it should
> handle any fragment list coming from the frontend that it can. This would
> allow the copy-and-coalesce code to be removed from the frontend (and the
> double-copy avoided). If there is a maximum backend packet size though
> then I think this needs to be advertised to the frontend. The backend should
> clearly bin packets coming from the frontend that exceed that limit but
> advertising that limit in xenstore allows the frontend to choose the right TSO
> maximum size to advertise to its stack, rather than having to base it
> on some historical value that actually has little meaning (in the absence of
> grant mapping).
>
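The copy-and-coalesce policy described above might be sketched as follows. This is a rough illustration, not the actual Citrix driver code: `PV_MAX_FRAGS`, `classify_tx` and `frags_after_coalesce` are invented names, 18 is the dom0 MAX_SKB_FRAGS value quoted in the mail, and 4096 is the usual x86 page size (the limit could equally be a value advertised via xenstore, as suggested above):

```c
#include <stddef.h>

/* Invented names for illustration only. */
#define PV_MAX_FRAGS 18      /* MAX_SKB_FRAGS lifted from the dom0 kernel */
#define PV_PAGE_SIZE 4096

enum tx_action { TX_PASS, TX_COALESCE, TX_DROP };

/* Buffers needed once the payload is copied into fully packed pages. */
static size_t frags_after_coalesce(size_t total_len)
{
    return (total_len + PV_PAGE_SIZE - 1) / PV_PAGE_SIZE;
}

static enum tx_action classify_tx(size_t nr_frags, size_t total_len)
{
    if (nr_frags <= PV_MAX_FRAGS)
        return TX_PASS;       /* fits on the ring as-is */
    if (frags_after_coalesce(total_len) <= PV_MAX_FRAGS)
        return TX_COALESCE;   /* copy into fewer, full pages */
    return TX_DROP;           /* stack exceeded the advertised limit */
}
```

Note that a maximally sized 64KB TSO needs ceil(65536/4096) = 16 pages, comfortably under 18, which is consistent with choosing the TSO size so that a coalesced packet always fits.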
As stated previously, I've observed Windows issuing staggering numbers of buffers to NDIS miniport drivers, so you will need to coalesce in a Windows driver anyway. I'm not sure where the break-even point is, but I think it's safe to say that, given the choice between using 1000 (worst case) ring slots (with the resulting mapping overheads) and coalescing in the frontend, coalescing is going to be the better option.
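To put rough numbers on that trade-off (a sketch only; the 4096-byte page size and 64KB TSO figure are assumptions, and the helper names are invented):

```c
#include <stddef.h>

#define PV_PAGE_SIZE 4096

/* One ring slot (and one grant operation) per stack buffer if the
 * fragment list is passed through to the backend unmodified. */
static size_t slots_direct(size_t nr_buffers)
{
    return nr_buffers;
}

/* Ring slots needed after the frontend copies the payload into
 * fully packed pages. */
static size_t slots_coalesced(size_t total_len)
{
    return (total_len + PV_PAGE_SIZE - 1) / PV_PAGE_SIZE;
}
```

For a 64KB TSO handed down by the stack in 1000 tiny buffers, direct mapping would cost 1000 slots versus 16 after coalescing, which is the comparison being made above.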
James