Message-ID: <CAK3Ji116nZ79csrq9GJ7C5e3zLPJina1r2+VasdVRsdY7n+3ww@mail.gmail.com>
Date:	Fri, 19 Oct 2012 17:51:24 -0700
From:	Vimal <j.vimal@...il.com>
To:	Rick Jones <rick.jones2@...com>
Cc:	davem@...emloft.net, eric.dumazet@...il.com,
	Jamal Hadi Salim <jhs@...atatu.com>, netdev@...r.kernel.org
Subject: Re: [PATCH] htb: improved accuracy at high rates

On 19 October 2012 16:52, Rick Jones <rick.jones2@...com> wrote:
>
> First some netperf/operational kinds of questions:
>
> Did it really take 20 concurrent netperf UDP_STREAM tests to get to those
> rates?  And why UDP_STREAM rather than TCP_STREAM?

Nope, even one netperf was sufficient.  Earlier I couldn't get
TCP_STREAM to send small packets, but on checking my script I found I
had forgotten to enable TCP_NODELAY and set the send buffer size (-s
$size).
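
For reference, the invocation I have in mind is roughly the following
(the receiver address 10.0.0.2 is a placeholder, not my actual setup):

  # small-message TCP_STREAM: -D sets TCP_NODELAY, -s/-S force the
  # local/remote socket buffer sizes, -m is the send message size
  netperf -H 10.0.0.2 -t TCP_STREAM -l 30 -- -D -s $size -S $size -m $size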

With one TCP sender I am unable to reach the 1Gb/s limit (only
~100Mb/s) even with a lot of CPU to spare, which suggests the test is
limited by end-to-end latency.  With 10 connections I could get only
800Mb/s, and with 20 connections it went up to 1160Mb/s, which
exceeds the 1Gb/s limit that was set.
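
For context, the 1Gb/s limit here is just a single HTB class on the
sender, set up along these lines (dev name and classids are
placeholders, not the exact commands from my script):

  tc qdisc add dev eth0 root handle 1: htb default 10
  tc class add dev eth0 parent 1: classid 1:10 htb rate 1gbit ceil 1gbit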

> I couldn't recall if GSO did anything for UDP, so did some quick and dirty
> tests flipping GSO on and off on a 3.2.0 kernel, and the service demands
> didn't seem to change.  So, with 8000 bytes of user payload did HTB actually
> see 8000ish byte packets, or did it actually see a series of <= MTU sized IP
> datagram fragments?  Or did the NIC being used have UFO enabled?
>

UFO was enabled.  I have now verified that the throughput is about the
same with TCP as well, with TSO enabled and the send/recv buffer sizes
forced to 8kB.
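
For anyone trying to reproduce this, the offload state can be checked
and flipped with ethtool (eth0 is a placeholder for the NIC I used):

  ethtool -k eth0 | egrep 'udp-fragmentation|tcp-segmentation|generic-segmentation'
  ethtool -K eth0 ufo on     # similarly: tso on|off, gso on|off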

> Which reported throughput was used from the UDP_STREAM tests - send side or
> receive side?

Send side.

>
> Is there much/any change in service demand on a netperf test?  That is what
> is the service demand of a mumble_STREAM test running through the old HTB
> versus the new HTB?  And/or the performance of a TCP_RR test (both
> transactions per second and service demand per transaction) before vs after.
>

At 1Gb/s with just one TCP_STREAM:
With old HTB:
Sdem local: 0.548us/KB, Sdem remote: 1.426us/KB.

With new HTB:
Sdem local: 0.598us/KB, Sdem remote: 1.089us/KB.
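
These service demand numbers come from netperf's CPU utilization
reporting, i.e. something like (receiver address is again a
placeholder):

  # -c/-C add local/remote CPU utilization and service demand (usec/KB)
  netperf -H 10.0.0.2 -t TCP_STREAM -l 30 -c -C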

TCP_RR: 1-byte request/response consumed very little bandwidth (~12Mb/s)
old HTB at 1Gb/s
Sdem local: 14.738us/trans, Sdem remote: 11.485us/trans, latency: 41.622us/trans.

new HTB at 1Gb/s
Sdem local: 14.505us/trans, Sdem remote: 11.440us/trans, latency: 41.709us/trans.

With multiple tests, these values are fairly stable. :)
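
The TCP_RR numbers above are from something like the following
(1-byte request and response; -c/-C again give the service demand
columns):

  netperf -H 10.0.0.2 -t TCP_RR -l 30 -c -C -- -r 1,1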

thanks,
> happy benchmarking,
>
> rick jones

-- 
Vimal