Date:	Wed, 17 Sep 2014 09:39:56 +0200
From:	Jesper Dangaard Brouer <brouer@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Stephen Hemminger <stephen@...workplumber.org>,
	Tom Herbert <therbert@...gle.com>,
	David Miller <davem@...emloft.net>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Daniel Borkmann <dborkman@...hat.com>,
	Florian Westphal <fw@...len.de>,
	Toke Høiland-Jørgensen 
	<toke@...e.dk>, Dave Taht <dave.taht@...il.com>, brouer@...hat.com
Subject: Re: Qdisc: Measuring Head-of-Line blocking with netperf-wrapper

On Tue, 16 Sep 2014 09:30:16 -0700
Eric Dumazet <eric.dumazet@...il.com> wrote:

> On Tue, 2014-09-16 at 17:56 +0200, Jesper Dangaard Brouer wrote:
> > On Tue, 16 Sep 2014 06:59:19 -0700
> > Eric Dumazet <eric.dumazet@...il.com> wrote:
> > 
> > > With the TCP usec rtt work I did lately, you'll get more precise results
> > > from a TCP_RR flow, as Tom and I explained.
> > 
> > Here you go, developed a new test:
> 
> Just to make sure I understand (sorry, I don't have time to go over all
> your graphs right now)

Summary for you:
1) I have created the TCP_RR latency test you and Tom asked for.

2) Graphs show that TCP_RR and UDP_RR are more accurate than ping.

3) Graphs show that ping results are within the same range as TCP_RR
   and UDP_RR.

4) My only problem: the NoneXSO case does not work with TCP_RR, and
   I need the NoneXSO case for evaluating my qdisc bulking patches.
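For reference, the kind of TCP_RR latency probe behind point 1 can be
sketched as below. This is a minimal sketch, not the actual
netperf-wrapper test: HOST and DUR are placeholders, and the "-o"
latency output selectors assume netperf's omni tests (netperf >= 2.6).

```shell
#!/bin/sh
# Minimal TCP_RR latency probe (sketch, not the netperf-wrapper test).
# HOST and DUR are placeholders; the -o output selectors assume the
# netperf omni test framework.
HOST=${HOST:-192.168.1.2}
DUR=${DUR:-30}

CMD="netperf -H $HOST -t TCP_RR -l $DUR -- -o min_latency,mean_latency,p99_latency"

# Print the command rather than running it, so the sketch can be
# inspected without a netserver listening on the other end.
echo "$CMD"
```

Piping the printed command to sh (with a netserver running on HOST)
would then report per-transaction min/mean/p99 latency in microseconds.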

> The target of your high prio flow is different from target of the
> antagonist flows ?
> 
> Otherwise, you are not only measuring head of line blocking of your
> host, but the whole chain, including scheduling latencies of the
> (shared) target.

For the target-host I'm avoiding the problem, as it receives packets on
different HW queues, and the netserver processes run on different CPUs.

For the host, I am, by design, forcing everything to run on the same
single CPU, to force use of the same HW queue, so I can measure this HW
queue and BQL's push-back. So yes, the host is also affected by
scheduling latencies, which is bad (perhaps the reason the NoneXSO case
cannot utilize the full bandwidth).

How can I construct a test-case, on the host, that solves this problem
(while still using/measuring the same xmit HW queue)?
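One (untested) direction: steer the traffic onto a single TX queue via
XPS, independently of which CPU the processes end up on. The sketch
below only computes and prints the CPU bitmask; eth0 and tx-0 are
placeholders for whatever NIC/queue is in use, and the actual sysfs
write is left commented out since it needs root.

```shell
#!/bin/sh
# Sketch: map a single CPU onto one TX queue via XPS (see
# Documentation/networking/scaling.txt). eth0/tx-0 are placeholders.

# Hex bitmask with only the given CPU's bit set, as xps_cpus expects.
xps_mask() { printf '%x\n' $((1 << $1)); }

mask=$(xps_mask 0)          # CPU 0 -> mask "1"
echo "$mask"
# Needs root; uncomment to apply:
# echo "$mask" > /sys/class/net/eth0/queues/tx-0/xps_cpus
```

Combined with taskset-pinning netperf to that CPU, this would keep the
measured traffic on one xmit HW queue even if other work migrates.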


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer