Message-ID: <20140925172329.7460f787@redhat.com>
Date:	Thu, 25 Sep 2014 17:23:29 +0200
From:	Jesper Dangaard Brouer <brouer@...hat.com>
To:	Tom Herbert <therbert@...gle.com>
Cc:	Jamal Hadi Salim <jhs@...atatu.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Linux Netdev List <netdev@...r.kernel.org>,
	"David S. Miller" <davem@...emloft.net>,
	Alexander Duyck <alexander.h.duyck@...el.com>,
	Toke Høiland-Jørgensen 
	<toke@...e.dk>, Florian Westphal <fw@...len.de>,
	Dave Taht <dave.taht@...il.com>,
	John Fastabend <john.r.fastabend@...el.com>,
	Daniel Borkmann <dborkman@...hat.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	brouer@...hat.com
Subject: Re: [net-next PATCH 1/1 V4] qdisc: bulk dequeue support for qdiscs
 with TCQ_F_ONETXQUEUE

On Thu, 25 Sep 2014 08:05:38 -0700
Tom Herbert <therbert@...gle.com> wrote:

> On Thu, Sep 25, 2014 at 7:57 AM, Jesper Dangaard Brouer
> <brouer@...hat.com> wrote:
> > On Thu, 25 Sep 2014 07:40:33 -0700
> > Tom Herbert <therbert@...gle.com> wrote:
> >
> >> A few test results in patch 0 would be good. I'd like to have results
> >> both with and without the patch. These should show two things: 1) any
> >> regressions caused by the patch, 2) the performance gains (in that
> >> order of importance :-) ). There doesn't need to be a lot here, just
> >> something reasonably representative, simple, and easily reproducible.
> >> My expectation for bulk dequeue is that we should see no obvious
> >> regression and hopefully an improvement in CPU utilization -- are you
> >> able to verify this?
> >
> > We are saving 3% CPU, as I described in my post with subject:
> > "qdisc/UDP_STREAM: measuring effect of qdisc bulk dequeue":
> >  http://thread.gmane.org/gmane.linux.network/331152/focus=331154
> >
> > Using UDP_STREAM on the 1Gbit/s igb driver, I can show that the
> > _raw_spin_lock calls are reduced by approx 3% when enabling
> > bulking of just 2 packets.
> >
>
> That's great. In the commit log, it would be good to have results with
> TCP_STREAM as well, and please report the aggregate CPU utilization
> changes (e.g. from mpstat).

TCP_STREAM is not a good test for this, because unless both TSO and GSO
are disabled, the packets will not hit the code path that this patch
changes.  When we later add support for bulking of TSO and GSO packets,
it will make sense to include TCP_STREAM testing, but not before.
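
If someone does want to exercise this path with TCP, the offloads have
to be switched off first -- roughly something like the sketch below
(eth0 and the 192.168.0.2 netserver address are just placeholders for
the actual test setup):

  # turn off TSO and GSO so the TCP packets go through qdisc dequeue
  # one skb at a time
  ethtool -K eth0 tso off gso off
  ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'

  # single-flow TCP test against the netserver host
  netperf -H 192.168.0.2 -t TCP_STREAM -l 60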

I will redo the tests once I get home to my testlab, as the remote lab
I'm using now is annoyingly slow at rebooting machines, and we no longer
have a runtime option for enabling/disabling bulking (I'm currently in
Switzerland).
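
When I redo the runs, the recipe will be roughly the sketch below (the
192.168.0.2 netserver address, run length and sample counts are
examples, not the exact lab config):

  # aggregate CPU utilization sampled during the run (for the mpstat numbers)
  mpstat -P ALL 5 12 > mpstat.log &

  # system-wide profile while the UDP test runs, to compare the
  # _raw_spin_lock share with and without the patch
  perf record -a -g -- netperf -H 192.168.0.2 -t UDP_STREAM -l 60 -- -m 1472

  perf report

Same commands on a kernel with and without the patch applied, then
compare the mpstat averages and the _raw_spin_lock percentage in the two
perf reports.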

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
