Message-ID: <51C22487.4080505@hp.com>
Date:	Wed, 19 Jun 2013 14:37:11 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Jerry Chu <hkchu@...gle.com>
CC:	Jason Wang <jasowang@...hat.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: qlen check in tun.c

On 06/19/2013 01:42 PM, Jerry Chu wrote:
> On Wed, Jun 19, 2013 at 12:49 PM, Rick Jones <rick.jones2@...com> wrote:
>> Assuming this single-stream is a netperf test, what happens when you cap the
>> socket buffers to 724000 bytes?  Put another way, is this simply a situation
>> where the autotuning of the socket buffers/window is taking a connection
>> somewhere it shouldn't go?
>
> You have a good point - for single netperf streaming the TCP window seems to
> grow much larger than necessary. Manually capping socket buffer seems to make
> the problem go away without hurting throughput - but only to some extent.
> Unfortunately manual setting is undesirable, and the autotuning code
> is difficult to "tune".
>
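
For what it's worth, capping things by hand doesn't have to mean a
system-wide sysctl - netperf's test-specific -s/-S options set the
buffers explicitly per test, and in a standalone program it is just a
pair of setsockopt() calls before connect().  A minimal sketch (724000
is simply the value from my earlier question; setting SO_SNDBUF/SO_RCVBUF
locks the sizes, so autotuning no longer grows them for that socket):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int cap = 724000;   /* value from the earlier question */

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    /* Locking the buffer sizes here means autotuning will not
     * grow them any further for this socket. */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &cap, sizeof(cap)) < 0 ||
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &cap, sizeof(cap)) < 0)
        perror("setsockopt");

    /* ... connect() and run the stream as usual ... */
    close(fd);
    return 0;
}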

...

>> Just what is the bandwidthXdelay product through the openvswitch?
>
> Unlike the traditional NIC, for tuntap it'd be CPU b/w times scheduling delay.
> Both can have a large variance. I haven't figured out how to right size the
> qlen in this scenario.

In perhaps overly broad, handwaving terms, doesn't wireless have a 
similar problem with highly variable latency/delay?

In theory, if your max scheduling delay is 10 milliseconds, your 
still-not-large-enough 8192-entry queue should still get you roughly 
1 GB/s, assuming the drain rate between scheduling delays is >> 1 GB/s.  
Is there really an expectation/requirement to do better than that, or 
to cope with even larger scheduling delays?  The existence of 40 and 
100 GbE (even bonded 10 GbE) notwithstanding, once one is talking about 
1 GB/s I would think one is looking more at SR-IOV than at going 
through an Open vSwitch.
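
Rough arithmetic behind that, assuming MTU-sized (~1500-byte) packets
and a queue that gets drained only once per max scheduling delay
(illustrative numbers, nothing measured):

#include <stdio.h>

int main(void)
{
    const double entries = 8192;        /* tun qlen under discussion */
    const double pkt_bytes = 1500;      /* assumed MTU-sized packets */
    const double max_delay_s = 0.010;   /* 10 ms max scheduling delay */

    double queue_bytes = entries * pkt_bytes;       /* ~12.3 MB */
    double floor_bps = queue_bytes / max_delay_s;   /* ~1.2 GB/s */

    printf("queue holds %.1f MB; drained once per %.0f ms -> %.2f GB/s\n",
           queue_bytes / 1e6, max_delay_s * 1e3, floor_bps / 1e9);
    return 0;
}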

Do you actually still see single-stream drops at 8192?  That should be 
something like 11 MB of queuing - I don't think I've seen tcp_[wr]mem go 
above 6 MB thus far...
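
The maximums autotuning is allowed to reach are the third field of
tcp_rmem/tcp_wmem - "min default max", in bytes.  Quickest check is
just to read them, e.g.:

#include <stdio.h>

static void show(const char *path)
{
    char line[128];
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        return;
    }
    if (fgets(line, sizeof(line), f))
        printf("%s: %s", path, line);
    fclose(f);
}

int main(void)
{
    show("/proc/sys/net/ipv4/tcp_rmem");
    show("/proc/sys/net/ipv4/tcp_wmem");
    return 0;
}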

rick

