Message-ID: <542C4E0D.4050404@mojatatu.com>
Date: Wed, 01 Oct 2014 14:55:09 -0400
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
CC: Tom Herbert <therbert@...gle.com>,
David Miller <davem@...emloft.net>,
Linux Netdev List <netdev@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Florian Westphal <fw@...len.de>,
Daniel Borkmann <dborkman@...hat.com>,
Alexander Duyck <alexander.duyck@...il.com>,
John Fastabend <john.r.fastabend@...el.com>,
Dave Taht <dave.taht@...il.com>,
Toke Høiland-Jørgensen <toke@...e.dk>
Subject: Re: [net-next PATCH V5] qdisc: bulk dequeue support for qdiscs with
TCQ_F_ONETXQUEUE
On 10/01/14 13:28, Jesper Dangaard Brouer wrote:
> Thus, code is activated only when q->qlen is >= 1. And I have already
> shown that we see a win with just bulking 2 packets:
If you can get 2 packets, indeed you win. If you can on average get >1
over a long period, you still win.
You have clearly demonstrated that you can do that with traffic
generators (UDP, or in-kernel pktgen). I was more worried about the
common use case scenario (handwaved as 1-24 TCP streams).
The key here is: *if you never hit bulking*, then the cost of the
sch_direct_xmit bypass is paid _per packet_.
The question is: what is that cost for the common case as defined above?
Can you hit a bulk level >1 on 1-24 TCP streams?
I would be happy if your answer is *yes*. If your answer is no (since
it is hard to achieve), then how far off is it from before your
patches (since you have now added, at minimum, a branch check)?
I think it is fair to ask you to quantify that, no?
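To make the concern concrete, here is a minimal sketch of the decision
being discussed: bulking only engages when packets are already
backlogged, and an empty queue pays just the extra branch on the fast
path. The names (`qdisc_model`, `try_bulk_dequeue`, `budget`) are
illustrative assumptions, not the kernel's actual API or the patch's
code.

```c
#include <assert.h>

/* Hypothetical model of a qdisc's backlog; only the queue length
 * matters for the branch we are discussing. */
struct qdisc_model {
	int qlen;	/* packets currently queued */
};

/* Returns the number of packets pulled in one dequeue pass.
 * If the queue is empty, we take the bypass: one branch, no
 * bulking, and that branch is the per-packet cost in question.
 * Otherwise we dequeue up to 'budget' packets in one go. */
static int try_bulk_dequeue(struct qdisc_model *q, int budget)
{
	int n = 0;

	if (q->qlen == 0)
		return 0;	/* bypass path: the added branch check */

	while (q->qlen > 0 && n < budget) {
		q->qlen--;	/* dequeue one packet */
		n++;
	}
	return n;
}
```

Under this model, the empirical question above becomes: for 1-24 TCP
streams, how often does the while loop run more than once?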
The feature is still useful for the other cases.
Note:
This is what I referred to as the "no animals were hurt during the
making of these patches" statement. I am sorry, again, for raining on
the parade.
cheers,
jamal