Message-ID: <542C5E8B.7070204@mojatatu.com>
Date: Wed, 01 Oct 2014 16:05:31 -0400
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
CC: Tom Herbert <therbert@...gle.com>,
David Miller <davem@...emloft.net>,
Linux Netdev List <netdev@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Florian Westphal <fw@...len.de>,
Daniel Borkmann <dborkman@...hat.com>,
Alexander Duyck <alexander.duyck@...il.com>,
John Fastabend <john.r.fastabend@...el.com>,
Dave Taht <dave.taht@...il.com>,
Toke Høiland-Jørgensen <toke@...e.dk>
Subject: Re: [net-next PATCH V5] qdisc: bulk dequeue support for qdiscs with
TCQ_F_ONETXQUEUE
On 10/01/14 15:47, Jesper Dangaard Brouer wrote:
>
> Answer is yes. It is very easy, with a simple netperf TCP_STREAM, to
> cause queueing of >1 packet in the qdisc layer.
If that is the case, I withdraw any doubts I had.
Can you please specify this in your commit logs for patch 0?
> If tuned (according to my blog: unloading netfilter, etc.), then a
> single netperf TCP_STREAM will max out 10Gbit/s and cause a standing
> queue.
>
You should describe such tuning in the patch log (it is hard to read
blogs for more than 30 seconds; write a paper if you want to provide
more details).
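
For what it's worth, a quick way to watch for that standing queue while
a "netperf -H <peer> -t TCP_STREAM" run is active is to poll the qdisc
backlog. A rough, untested sketch - the device name, the 1s interval,
and the plain-text scraping of "tc -s qdisc show" output are all
assumptions on my part:

/*
 * Sample the qdisc backlog once per second by scraping the "backlog"
 * line of "tc -s qdisc show dev <dev>". Stop with Ctrl-C.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "eth0"; /* assumed device */
	char cmd[128], line[256];

	snprintf(cmd, sizeof(cmd), "tc -s qdisc show dev %s", dev);

	for (;;) {
		FILE *fp = popen(cmd, "r");

		if (!fp)
			return 1;
		while (fgets(line, sizeof(line), fp)) {
			/* tc prints e.g. " backlog 123456b 84p requeues 0" */
			if (strstr(line, "backlog"))
				fputs(line, stdout);
		}
		pclose(fp);
		sleep(1);
	}
	return 0;
}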
> I'm monitoring the backlog of qdiscs, and I always see >1 backlog; I
> never saw a standing queue of 1 packet in my testing. Either the
> backlog is in the high 100-200 packet range, or it is 0. (With fake
> pktgen/trafgen-style tests, it's possible to cause a backlog of 1000.)
It would be nice to actually collect such stats. Monitoring the backlog
by dumping qdisc stats is a good start, but keeping traces of the
average bulk size is more useful - something along the lines of the
sketch below.
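
To make concrete what I mean, here is a standalone toy model of that
stat - not code from the patch; the EWMA weight and the fake trace of
bulk sizes are made up for illustration:

/*
 * Toy model: track how many packets each bulk dequeue pulls off the
 * qdisc, as a running total plus an exponentially weighted moving
 * average. In the kernel this would be updated once per bulk dequeue.
 */
#include <stdio.h>

struct bulk_stats {
	unsigned long bulks;	/* number of bulk dequeue events */
	unsigned long pkts;	/* total packets dequeued */
	double ewma;		/* smoothed bulk size */
};

static void record_bulk(struct bulk_stats *s, unsigned int bulk_size)
{
	const double alpha = 0.25;	/* smoothing weight, assumed */

	s->bulks++;
	s->pkts += bulk_size;
	if (s->bulks == 1)
		s->ewma = bulk_size;
	else
		s->ewma = (1.0 - alpha) * s->ewma + alpha * bulk_size;
}

int main(void)
{
	struct bulk_stats s = { 0, 0, 0.0 };
	/* fake trace: bulk sizes as they might come off a busy txq */
	unsigned int trace[] = { 1, 3, 2, 4, 4, 1, 2, 3 };
	unsigned int i;

	for (i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
		record_bulk(&s, trace[i]);

	printf("bulks=%lu pkts=%lu avg=%.2f ewma=%.2f\n",
	       s.bulks, s.pkts, (double)s.pkts / s.bulks, s.ewma);
	return 0;
}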
cheers,
jamal