Date:	Tue, 16 Sep 2014 19:08:23 +0300
From:	Dave Taht <dave.taht@...il.com>
To:	Jesper Dangaard Brouer <brouer@...hat.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Stephen Hemminger <stephen@...workplumber.org>,
	Tom Herbert <therbert@...gle.com>,
	David Miller <davem@...emloft.net>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Daniel Borkmann <dborkman@...hat.com>,
	Florian Westphal <fw@...len.de>,
	Toke Høiland-Jørgensen <toke@...e.dk>
Subject: Re: Qdisc: Measuring Head-of-Line blocking with netperf-wrapper

On Tue, Sep 16, 2014 at 6:56 PM, Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
> On Tue, 16 Sep 2014 06:59:19 -0700
> Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>> With the TCP usec rtt work I did lately, you'll get more precise results
>> from a TCP_RR flow, as Tom and I explained.
>
> Here you go, developed a new test:
>  http://people.netfilter.org/hawk/qdisc/experiment01/README.txt
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol.conf
>  https://github.com/netoptimizer/netperf-wrapper/commit/7d0241a78e5
>
> The test includes both a TCP_RR and a UDP_RR test that derive the
> latency; I also kept the ping tests for comparison.

You have incidentally overwhelmed *me* with data. Thank you very much
for including the *.json* files in your experiments; I'll be able to parse
and compare them later with netperf-wrapper when I get more time.
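
For reference, the RR-derived latency those tests use can also be
checked with a bare netperf run; a sketch (the host is a placeholder,
and the omni output selectors assume a reasonably recent netperf):

  # One transaction in flight at a time; per-transaction latency (usec)
  # is reported directly by the omni output selectors
  netperf -H <target-host> -t TCP_RR -l 30 -- \
      -o MIN_LATENCY,MEAN_LATENCY,P99_LATENCY,MAX_LATENCY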

> One problem: the NoneXSO test is basically invalid, because I cannot
> make it exhaust the bandwidth; see "Total Upload":
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__totals--NoneXSO_net_next.png

Well, I've long conceded that TSO and GSO offloads are needed at 10GigE
speeds. I'd love to get a grip on how bursty they are, since some
moderation fixes landed a few kernel versions back.
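
If it helps chase the NoneXSO anomaly, the offloads are easy to toggle
per interface with ethtool; a sketch (eth4 taken from your setup, and
which offloads a given driver exposes varies):

  # Show which offloads are currently enabled
  ethtool -k eth4
  # Disable segmentation offloads to get the NoneXSO case
  sudo ethtool -K eth4 tso off gso off
  # Re-enable them for the TSO/GSO runs
  sudo ethtool -K eth4 tso on gso on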

>
> Looking at the ping test, there is a clear difference between priority
> bands; this just shows that the priority bands are working as expected
> and the qdisc is backlogged.
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__ping--GSO_net_next.png
> E.g. the ping test for NoneXSO shows it is not backlogged, a broken test:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__ping--NoneXSO_net_next.png
>

I look forward to seeing sch_fq and fq_codel data, for comparison.
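
Swapping the root qdisc in between runs should be a one-liner each; a
sketch, assuming eth4 is still the egress interface:

  # Replace the prio root qdisc with fq or fq_codel for comparison runs
  sudo tc qdisc replace dev eth4 root fq
  sudo tc qdisc replace dev eth4 root fq_codel
  # Inspect what is installed, with statistics
  tc -s qdisc show dev eth4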

>
> Zooming in on the high priority band, we see how the different
> high-prio band measurements are working.
> Here for GSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__ping_hiprio--GSO_net_next.png
> Here for TSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__ping_hiprio--TSO_net_next.png
>
> I've created a new graph called "rr_latency" that further zooms in on
> the difference between TCP_RR and UDP_RR measurements:
> Here for GSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__rr_latency--GSO_net_next.png
> Here for TSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__rr_latency--TSO_net_next.png
> A compare graph:
>  http://people.netfilter.org/hawk/qdisc/experiment01/compare_TSO_vs_GSO__rr_latency.png
>
> I found the interactions a little strange in the above graphs.


I note that Toke had started coding netperf-wrapper back in the day
when 10Mbit links and RTTs measured in seconds were the norm. I am
DELIGHTED to see it works at all at 10GigE. Other network measurement
tools, like netalyzr, top out at 20Mbit....

You can capture more detail about the tc setup, in particular, if you
invoke it with the -x option.

You might get more detail on your plots if you run as root with
--step-size .01, for a 10ms sampling interval rather than a 200ms one.
This doesn't quite work on a few older tests, notably rrul.
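
Something like this (a sketch; the host and test name here are just
placeholders, substitute your own):

  # -x records extended metadata (including the tc/qdisc setup) in the
  # results; --step-size .01 samples every 10ms, which needs root
  sudo netperf-wrapper -x --step-size .01 -l 60 -H <target-host> tcp_upload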

> Even stranger, I started to play with the ixgbe cleanup interval,
> adjusting it via the cmdline:
>  sudo ethtool -C eth4 rx-usecs 30
>
> Then the "rr_latency" graph changes, significantly lowering the latency.
> Here for GSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__rr_latency--rxusecs30_GSO_net_next.png
> Here for TSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__totals--rxusecs30_TSO_net_next.png

There is always a tradeoff between better batching and better latency.
I've kind of hoped that with the new post-Ivy-Bridge architectures the
balance, even the NAPI weight, was shifting: dealing with low-latency
packets while they are still in cache may be more efficient than
batching up processing. The numbers the DPDK folk were getting were
astounding.
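
For anyone reproducing the rx-usecs experiment, it is worth reading the
adapter's current coalescing state first (eth4 from your setup; which
fields a given driver supports varies):

  # Show the current interrupt coalescing parameters
  ethtool -c eth4
  # Push toward the low-latency end of the tradeoff (value illustrative)
  sudo ethtool -C eth4 rx-usecs 1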

(but I still can't make heads or tails of where you are going with all this)

> Compare graph for GSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/compare_GSO_vs_GSO_with_rxusec30__rr_latency.png
> Compare graph for TSO:
>  http://people.netfilter.org/hawk/qdisc/experiment01/compare_TSO_vs_TSO_with_rxusec30__rr_latency.png
> Comparing TSO vs GSO, both with rx-usecs 30; they are almost equal:
>  http://people.netfilter.org/hawk/qdisc/experiment01/compare_TSO_vs_GSO_both_with_rxusec30__rr_latency.png
>
>
> Checking ping, it still follows TCP_RR and UDP_RR, with rx-usecs 30:
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__ping_hiprio--rxusecs30_GSO_net_next.png
>  http://people.netfilter.org/hawk/qdisc/experiment01/qdisc_prio_hol__ping_hiprio--rxusecs30_TSO_net_next.png

I have generally found that it is easier to present all this data as
graphs, or a combined graph, on a web page rather than in email.

> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Sr. Network Kernel Developer at Red Hat
>   Author of http://www.iptv-analyzer.org
>   LinkedIn: http://www.linkedin.com/in/brouer



-- 
Dave Täht

https://www.bufferbloat.net/projects/make-wifi-fast
