Message-ID: <20130322110600.GA5742@zion.uk.xensource.com>
Date: Fri, 22 Mar 2013 11:06:00 +0000
From: Wei Liu <wei.liu2@...rix.com>
To: James Harper <james.harper@...digoit.com.au>
CC: Wei Liu <liuw@...w.name>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
"annie.li@...cle.com" <annie.li@...cle.com>,
Ian Campbell <Ian.Campbell@...rix.com>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>
Subject: Re: [Xen-devel] [PATCH 4/4] xen-netback: coalesce slots before
copying
On Thu, Mar 21, 2013 at 10:14:17PM +0000, James Harper wrote:
> >
> >> Actually it turns out GPLPV just stops counting at 20. If I keep
> >> counting I can sometimes see over 1000 buffers per GSO packet under
> >> Windows using "iperf -
> >
> > Do you think it is necessary to increase MAX_SKB_SLOTS_DEFAULT to 21?
> >
>
> Doesn't really matter. Under Windows you have to coalesce anyway, and the number of cases where the skb count is 20 or 21 is very small, so there would be negligible gain and it would break guests that can't handle more than 19.
It's not about performance, it's about usability. If the frontend uses
more slots than the backend allows, it gets disconnected. To avoid
pushing the wrong value upstream, it is important to know whether 20 is
enough for the Windows PV driver.
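
For illustration, here is a minimal user-space sketch of the policy
being discussed. The names (tx_request, NETTXF_MORE_DATA,
count_packet_slots, MAX_SKB_SLOTS_DEFAULT's use here) are simplified
stand-ins, not the actual xen-netback code: the backend walks the tx
requests that make up one packet and refuses a frontend that spans more
slots than the advertised maximum.

/* Hedged sketch, not the real driver: count the slots one packet spans
 * on the shared ring; a frontend exceeding the limit is treated as
 * misbehaving, which leads to disconnection. */
#include <stddef.h>

#define MAX_SKB_SLOTS_DEFAULT 20        /* value under discussion */
#define NETTXF_MORE_DATA (1 << 2)       /* "another slot follows" flag */

struct tx_request {
        unsigned int flags;
        /* grant ref, offset, size elided */
};

/* Returns the slot count, or -1 if the packet exceeds the limit, in
 * which case the caller would disconnect the frontend. */
static int count_packet_slots(const struct tx_request *txp, size_t nreq)
{
        size_t slots = 0;

        for (size_t i = 0; i < nreq; i++) {
                if (++slots > MAX_SKB_SLOTS_DEFAULT)
                        return -1;      /* frontend used too many slots */
                if (!(txp[i].flags & NETTXF_MORE_DATA))
                        break;          /* last slot of this packet */
        }
        return (int)slots;
}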
>
> Has anyone done benchmarks on whether using memcpy to coalesce is better or worse than consuming additional ring slots? Probably OT here, but I'm talking about packets that might have 19 buffers yet could fit in a page or two if coalesced.
>
After this changeset, the number of grant copy operations is greater
than or equal to the number of slots. I ran iperf as my functional test
and noticed the result is within the same range as before this change.
A future improvement would be to use compound pages in the backend,
which could make the number of grant copy ops more or less equal to the
number of slots used.
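
As a rough illustration of why the op count stays at or above the slot
count: each non-empty frontend slot needs at least one grant copy op,
and a slot that straddles a destination page boundary needs two. The
sketch below uses invented types (slot, copy_op, coalesce); it is not
the patch itself, just the packing arithmetic.

/* Hedged sketch of the coalescing idea: several copy ops can target
 * offsets within one backend page, so the page count shrinks while the
 * op count remains >= the slot count. */
#include <stddef.h>

#define PAGE_SIZE 4096

struct slot    { size_t len; /* grant ref etc. elided */ };
struct copy_op { size_t dst_page, dst_off, len; };

/* Pack slots back to back into pages; returns the number of copy ops
 * emitted (>= nslots when every slot is non-empty). */
static size_t coalesce(const struct slot *slots, size_t nslots,
                       struct copy_op *ops)
{
        size_t page = 0, off = 0, nops = 0;

        for (size_t i = 0; i < nslots; i++) {
                size_t remaining = slots[i].len;
                while (remaining > 0) {
                        if (off == PAGE_SIZE) {  /* current page full */
                                page++;
                                off = 0;
                        }
                        size_t chunk = remaining;
                        if (chunk > PAGE_SIZE - off)
                                chunk = PAGE_SIZE - off; /* slot split: extra op */
                        ops[nops].dst_page = page;
                        ops[nops].dst_off  = off;
                        ops[nops].len      = chunk;
                        nops++;
                        off += chunk;
                        remaining -= chunk;
                }
        }
        return nops;
}

With compound pages, the effective destination size would grow, so fewer
slots would straddle a boundary and nops would approach nslots.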
Wei.
> James
>