Message-ID: <CAA93jw5orK2FeD20zcFpXGkqGa=kF8E-vGtkdUs7vr54R5AE6g@mail.gmail.com>
Date:	Wed, 1 Oct 2014 22:18:05 -0700
From:	Dave Taht <dave.taht@...il.com>
To:	Jesper Dangaard Brouer <brouer@...hat.com>
Cc:	Jamal Hadi Salim <jhs@...atatu.com>,
	Tom Herbert <therbert@...gle.com>,
	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Florian Westphal <fw@...len.de>,
	Daniel Borkmann <dborkman@...hat.com>,
	Alexander Duyck <alexander.duyck@...il.com>,
	John Fastabend <john.r.fastabend@...el.com>,
	Toke Høiland-Jørgensen <toke@...e.dk>
Subject: Re: [net-next PATCH V5] qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE

>> > I'm monitoring the backlog of qdiscs, and I always see >1 backlog; I
>> > never saw a standing queue of 1 packet in my testing.  Either the
>> > backlog is high, around 100-200 packets, or it is 0.  (With fake
>> > pktgen/trafgen style tests, it's possible to cause a backlog of 1000.)
>>
>> It would be nice to actually collect such stats. Monitoring the backlog
>> via dumping qdisc stats is a good start - but actually keeping traces
>> of average bulk size is more useful.

I am not huge on averages. Network theorists tend to think about things in
terms of fluid models. Van Jacobson's analogy of a water fountain's
operation is very profound...

While it is nearly impossible for a conventional von Neumann, time-sliced
CPU + network to actually act that way, things like BQL and dedicated
pipelining systems like those in DPDK are getting closer to that ideal.

An example of where averages let you down is the classic 5 minute
data reduction that things like mrtg do, where you might see 60% of the
bandwidth (capacity per 5 minutes) in use, yet still see drops, because
over shorter intervals (capacity per 10ms) bursts arrive that exceed the
line rate and overflow the queue.
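
To put toy numbers on it (made up purely to show the effect, python):

# Toy numbers: a link watched mrtg-style over 5 minutes vs. 10 ms windows.
CAPACITY_BPS = 100e6              # assumed link rate: 100 Mbit/s
BURST_RATE   = 2 * CAPACITY_BPS   # bursts arrive at 2x line rate
BURST_S, IDLE_S = 0.030, 0.070    # 30 ms burst, 70 ms idle, repeating

duty = BURST_S / (BURST_S + IDLE_S)
avg_util = duty * BURST_RATE / CAPACITY_BPS
print(f"5-minute average utilisation: {avg_util:.0%}")   # 60%

# But any 10 ms window that lands inside a burst sees arrivals at 200%
# of capacity, so a shallow queue fills and drops even though the
# 5-minute graph looks comfortably under capacity.
peak_window_util = BURST_RATE / CAPACITY_BPS
print(f"worst 10 ms window: {peak_window_util:.0%} of capacity")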

Far better to sample queue occupancy at the highest rate you can
manage without heisenbugging the results and look at the detailed
curves.

> I usually also monitor the BQL limits during these tests.
>
>  grep -H . /sys/class/net/eth4/queues/tx-*/byte_queue_limits/{inflight,limit}
>
> To Toke:
>  Perhaps we could convince Toke to add a netperf-wrapper recorder for
> the BQL inflight and limit?  (It would be really cool to plot them together.)

I just added a command-line mode and support for timestamped limit and
inflight output to bqlmon.

Get it here:

https://github.com/dtaht/bqlmon

That should be sufficient for netperf-wrapper to poll it efficiently.
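
Roughly the kind of output I mean (just a sketch of the idea, not the
actual bqlmon code; the device name is a placeholder, and the paths follow
the sysfs layout in the grep above):

#!/usr/bin/env python3
# Sketch of a timestamped BQL recorder: read inflight and limit for
# every tx queue and emit CSV lines something like netperf-wrapper
# could collect.
import glob
import time

DEV = "eth4"   # placeholder; match whatever device is under test
PATTERN = f"/sys/class/net/{DEV}/queues/tx-*/byte_queue_limits"

def read(path):
    with open(path) as f:
        return f.read().strip()

print("timestamp,queue,inflight,limit")
while True:
    now = time.time()
    for q in sorted(glob.glob(PATTERN)):
        name = q.split("/")[-2]   # e.g. "tx-0"
        print(f"{now:.6f},{name},{read(q + '/inflight')},{read(q + '/limit')}")
    time.sleep(0.01)              # 10 ms; adjust to taste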
As to how to graph it, I suppose finding the max(limit) across the
entire sample set and scaling inflight appropriately would be good.
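
i.e. something like this (matplotlib sketch, assuming the CSV format from
the recorder sketch above and a single tx queue for simplicity):

# Normalise inflight against the largest limit seen in the whole run,
# so bursts show up on a 0-1 scale.
import csv
import matplotlib.pyplot as plt

ts, inflight, limit = [], [], []
with open("bql.csv") as f:               # placeholder filename
    for row in csv.DictReader(f):
        ts.append(float(row["timestamp"]))
        inflight.append(int(row["inflight"]))
        limit.append(int(row["limit"]))

scale = max(limit) or 1
t0 = ts[0]
plt.plot([t - t0 for t in ts], [i / scale for i in inflight], label="inflight / max(limit)")
plt.plot([t - t0 for t in ts], [l / scale for l in limit], label="limit / max(limit)")
plt.xlabel("time (s)")
plt.ylabel("fraction of max(limit)")
plt.legend()
plt.show()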

What test(s) would be best to combine it with? rrul? tcp_square_wave?

-- 
Dave Täht

https://www.bufferbloat.net/projects/make-wifi-fast