Date:	Fri, 19 Sep 2014 07:57:37 -0400
From:	Jamal Hadi Salim <jhs@...atatu.com>
To:	Jesper Dangaard Brouer <jbrouer@...hat.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:	"David S. Miller" <davem@...emloft.net>,
	Tom Herbert <therbert@...gle.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Florian Westphal <fw@...len.de>,
	Daniel Borkmann <dborkman@...hat.com>,
	Alexander Duyck <alexander.duyck@...il.com>,
	John Fastabend <john.r.fastabend@...el.com>
Subject: Re: qdisc/trafgen: Measuring effect of qdisc bulk dequeue, with trafgen

On 09/19/14 06:35, Jesper Dangaard Brouer wrote:
>
> This experiment was about finding the tipping point where bulking
> from the qdisc kicks in.  This is an artificial benchmark.
>
> This testing relates to my qdisc bulk dequeue patches:
>   http://thread.gmane.org/gmane.linux.network/328829/focus=328951
>
> My point has always been that we should only start bulking packets
> when really needed; I dislike attempts to delay TX in anticipation of
> packets arriving shortly (due to the added latency).  IMHO the qdisc
> layer is the right place to "see" when bulking makes sense.
>
> The reason behind this test is that there are two code paths in the
> qdisc layer: 1) when the qdisc is empty we allow the packet to go
> directly via sch_direct_xmit(); 2) when the qdisc already contains
> packets we go through a more expensive process of enqueue, dequeue
> and possibly rescheduling a softirq.
>
> Thus, the cost when the qdisc kicks in should be slightly higher.  My
> qdisc bulk dequeue patch should actually help us get faster in this
> case.  The results below (with dequeue bulking of max 4 packets) show
> that this was true; as expected, the locking cost was reduced, giving
> us an actual speedup.
>
>
> Testing this tipping point is hard, but I found a trafgen setup that
> was just balancing on this tipping point: a single-CPU 1Gbit/s setup
> with the igb driver.
>
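
For reference, the two code paths described above boil down to roughly
the following.  This is a condensed pseudo-C sketch loosely modeled on
__dev_xmit_skb() and __qdisc_run() -- not the actual kernel code; the
function name here is made up, and root-lock handling, return codes,
requeue and busylock details are all omitted:

/*
 * Sketch only: path 1 bypasses the queue entirely, path 2 pays for
 * enqueue + dequeue and may defer work to the NET_TX softirq.
 */
static int xmit_sketch(struct sk_buff *skb, struct Qdisc *q,
		       struct net_device *dev, struct netdev_queue *txq,
		       spinlock_t *root_lock)
{
	/* Path 1: qdisc is empty and can be bypassed -- hand the skb
	 * straight to the driver, no enqueue/dequeue round-trip. */
	if ((q->flags & TCQ_F_CAN_BYPASS) && !qdisc_qlen(q) &&
	    qdisc_run_begin(q)) {
		sch_direct_xmit(skb, q, dev, txq, root_lock);
		qdisc_run_end(q);
		return NET_XMIT_SUCCESS;
	}

	/* Path 2: qdisc already holds packets -- enqueue, then run the
	 * dequeue loop, which may punt the remaining work to the
	 * NET_TX softirq via __netif_schedule(). */
	q->enqueue(skb, q);
	__qdisc_run(q);
	return NET_XMIT_SUCCESS;
}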
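
And the bulk dequeue idea, very roughly (again just a sketch with a
made-up function name, not Jesper's actual patch -- the real series
also applies budget limits and restricts when bulking is allowed).
With budget=4 this corresponds to the "max 4 packets" mentioned above:

/*
 * Sketch: while we already paid for the qdisc root lock, pull up to
 * 'budget' packets and chain them via skb->next, so a single
 * sch_direct_xmit() / HARD_TX_LOCK round-trip covers the whole bundle
 * instead of one lock round-trip per packet.
 */
static struct sk_buff *bulk_dequeue_sketch(struct Qdisc *q, int budget)
{
	struct sk_buff *head, *tail, *skb;

	head = tail = q->dequeue(q);
	if (!head)
		return NULL;

	while (--budget > 0 && (skb = q->dequeue(q)) != NULL) {
		tail->next = skb;	/* build an skb list for the driver */
		tail = skb;
	}
	tail->next = NULL;

	return head;	/* caller transmits the whole list in one go */
}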

The feedback system is clearly very well oiled. Or is it? ;->
Jesper, maybe you need to poke at the system level as opposed to the
microscopic lock level. The transmit path is essentially kicked by the
TX softirq, which is driven by the RX path, etc., and those two work
like a clock pendulum.
To keep that sucker busy, you may have more luck with forwarding-type
traffic: funnel traffic from many NIC ports, tied to different CPUs,
into one egress port.
Some coffee helped me remember that I actually surrendered on whether
this can be done at all at netconf 2011 [1], but please don't let me
poison your thinking - you may find otherwise.

cheers,
jamal

[1] http://vger.kernel.org/netconf2011_slides/jamal_netconf2011.pdf, slide 12