Message-ID: <6035A0D088A63A46850C3988ED045A4B387FB199@BITCOM1.int.sbss.com.au>
Date:	Thu, 21 Mar 2013 22:14:17 +0000
From:	James Harper <james.harper@...digoit.com.au>
To:	Wei Liu <liuw@...w.name>
CC:	Wei Liu <wei.liu2@...rix.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
	"annie.li@...cle.com" <annie.li@...cle.com>,
	"ian.campbell@...rix.com" <ian.campbell@...rix.com>,
	"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>
Subject: RE: [Xen-devel] [PATCH 4/4] xen-netback: coalesce slots before
 copying

> 
>> Actually it turns out GPLPV just stops counting at 20. If I keep
>> counting I can sometimes see over 1000 buffers per GSO packet under
>> Windows using "iperf -
> 
> Do you think it is necessary to increase MAX_SKB_SLOTS_DEFAULT to 21?
> 

Doesn't really matter. Under Windows you have to coalesce anyway, and the number of cases where the skb count is exactly 20 or 21 is very small, so there would be negligible gain, and it would break guests that can't handle more than 19.

Has anyone benchmarked whether using memcpy to coalesce is better or worse than consuming additional ring slots? Probably OT here, but I'm talking about packets that might have 19 buffers yet could fit into a page or two if coalesced.
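To make concrete what I mean by coalescing, here is a minimal sketch of copying scattered fragments into one contiguous page with memcpy. The struct and function names here are illustrative only, not xen-netback's actual API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Illustrative fragment descriptor: a pointer into a source
 * buffer plus its length, standing in for one ring slot. */
struct frag {
	const void *data;
	size_t len;
};

/* Copy nfrags fragments back-to-back into a single page-sized
 * destination. Returns total bytes copied, or -1 if the
 * fragments don't all fit in the destination. */
static long coalesce_frags(const struct frag *frags, size_t nfrags,
			   unsigned char *page, size_t page_len)
{
	size_t off = 0;
	size_t i;

	for (i = 0; i < nfrags; i++) {
		if (off + frags[i].len > page_len)
			return -1;	/* would overflow the page */
		memcpy(page + off, frags[i].data, frags[i].len);
		off += frags[i].len;
	}
	return (long)off;
}
```

The tradeoff in question is the cost of these memcpy calls versus the cost of occupying (and later completing) extra ring slots for the same payload.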

James
