Message-ID: <20141001223229.6cbaac07@redhat.com>
Date:	Wed, 1 Oct 2014 22:32:29 +0200
From:	Jesper Dangaard Brouer <brouer@...hat.com>
To:	Jamal Hadi Salim <jhs@...atatu.com>
Cc:	Tom Herbert <therbert@...gle.com>,
	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Florian Westphal <fw@...len.de>,
	Daniel Borkmann <dborkman@...hat.com>,
	Alexander Duyck <alexander.duyck@...il.com>,
	John Fastabend <john.r.fastabend@...el.com>,
	Dave Taht <dave.taht@...il.com>,
	Toke Høiland-Jørgensen 
	<toke@...e.dk>, brouer@...hat.com
Subject: Re: [net-next PATCH V5] qdisc: bulk dequeue support for qdiscs with
 TCQ_F_ONETXQUEUE

On Wed, 01 Oct 2014 16:05:31 -0400
Jamal Hadi Salim <jhs@...atatu.com> wrote:

> On 10/01/14 15:47, Jesper Dangaard Brouer wrote:
> 
> >
> > Answer is yes.  It is very easy with simple netperf TCP_STREAM to cause
> > queueing >1 packet in the qdisc layer.
> 
> If that is the case, I withdraw any doubts I had.
> Can you please specify this in your commit logs for patch 0?

I'll try to make it more explicit.
Will resubmit patchset shortly...

Notice, it is not difficult to cause a queue to form, but it is tricky
(not difficult) to correctly test this patchset.  Perhaps you misread
my earlier statement as "it was difficult to test and cause a queue to
form"?
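
For reference, this is roughly how I provoke and watch such a queue
(a sketch only; 10.0.0.2 and eth4 are placeholders for my testbed):

 # Terminal 1: single bulk TCP flow towards the netserver host
 netperf -H 10.0.0.2 -t TCP_STREAM -l 60

 # Terminal 2: watch the qdisc backlog counters on the TX device
 watch -d 'tc -s qdisc show dev eth4'

Anything above 1 packet in the "backlog" counter is enough to exercise
the bulk dequeue path.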


> > If tuned (according to my blog,
> > unloading netfilter etc.) then a single netperf TCP_STREAM will max out
> > 10Gbit/s and cause a standing queue.
> >
> 
> You should describe such tuning in the patch log (hard to read
> blogs for more than 30 seconds; write a paper if you want to provide
> more details).

I think you could read this blog in 30 sec:
 http://netoptimizer.blogspot.dk/2014/04/basic-tuning-for-network-overload.html
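
The gist of that tuning, from memory of my own setup (which exact
module names apply depends on what you have loaded):

 # Unload netfilter/conntrack, so their per-packet hooks are not
 # the bottleneck before the qdisc layer is
 rmmod iptable_filter ip_tables
 rmmod nf_conntrack_ipv4 nf_conntrack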

My cover letter and testing section... will take you longer than 30
sec; it has grown quite large (and Eric will not even read it :-P ;-))

Believe it or not, I've actually restricted and reduced the testing
section.  If you want the whole verbose version of my testing for the
upcoming V6 patch, look at this:

 http://people.netfilter.org/hawk/qdisc/measure12_internal_V6_patch/
 http://people.netfilter.org/hawk/qdisc/measure13_V6_patch_NObulk/

And use netperf-wrapper to dive into the data.
A quick setup guide:
 http://netoptimizer.blogspot.dk/2014/09/mini-tutorial-for-netperf-wrapper-setup.html
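
Once setup, re-plotting one of the data files above goes something
like this (flags from memory of the netperf-wrapper version I run;
double-check against --help):

 # Re-plot an existing measurement file without re-running the test
 netperf-wrapper -i measure.json.gz -p totals -o totals.png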


> > I'm monitoring backlog of qdiscs, and I always see >1 backlog, I never
> > saw a standing queue of 1 packet in my testing.  Either the backlog
> > area is high 100-200 packets, or 0 backlog.  (With fake pktgen/trafgen
> > style tests, it's possible to cause 1000 backlog).
> 
> It would be nice to actually collect such stats. Monitoring the backlog
> via dumping qdisc stats is a good start - but actually keeping traces
> of average bulk size is more useful.

I usually also monitor the BQL limits during these tests.

 grep -H . /sys/class/net/eth4/queues/tx-*/byte_queue_limits/{inflight,limit}

To Toke:
 Perhaps we could convince Toke to add a netperf-wrapper recorder for
the BQL inflight and limit?  (It would be really cool to plot them
together.)
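
Until then, a small shell loop can capture the same data (a sketch;
1-second samples, eth4 again a placeholder):

 # Sample BQL inflight/limit for every TX queue, with timestamps
 while true; do
   ts=$(date +%s.%N)
   for f in /sys/class/net/eth4/queues/tx-*/byte_queue_limits/{inflight,limit}; do
     echo "$ts $f $(cat $f)"
   done
   sleep 1
 done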

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer