Message-ID: <063D6719AE5E284EB5DD2968C1650D6D1747D7F3@AcuExch.aculab.com>
Date:	Thu, 21 Aug 2014 14:29:41 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Razya Ladelsky' <RAZYA@...ibm.com>,
	"Michael S. Tsirkin" <mst@...hat.com>
CC:	"abel.gordon@...il.com" <abel.gordon@...il.com>,
	Alex Glikson <GLIKSON@...ibm.com>,
	Eran Raichstein <ERANRA@...ibm.com>,
	Joel Nider <JOELN@...ibm.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"virtualization@...ts.linux-foundation.org" 
	<virtualization@...ts.linux-foundation.org>,
	Yossi Kuperman1 <YOSSIKU@...ibm.com>
Subject: RE: [PATCH] vhost: Add polling mode

From: Razya Ladelsky
> "Michael S. Tsirkin" <mst@...hat.com> wrote on 20/08/2014 01:57:10 PM:
> 
> > > Results:
> > >
> > > Netperf, 1 vm:
> > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
> > > Number of exits/sec decreased 6x.
> > > The same improvement was shown when I tested with 3 vms running netperf
> > > (4086 MB/sec -> 5545 MB/sec).
> > >
> > > filebench, 1 vm:
> > > ops/sec improved by 13% with the polling patch. Number of exits
> > > was reduced by 31%.
> > > The same experiment with 3 vms running filebench showed similar numbers.
> > >
> > > Signed-off-by: Razya Ladelsky <razya@...ibm.com>
> >
> > This really needs a more thorough benchmarking report, including
> > system data.  One good example for a related patch:
> > http://lwn.net/Articles/551179/
> > though for virtualization we need data about the host as well, and
> > if you want to look at streaming benchmarks, you need to test
> > different message sizes and measure packet size.
> >
> 
> Hi Michael,
> I have already tried running netperf with several message sizes:
> 64, 128, 256, 512, 600, 800...
> But the results are inconsistent even in the baseline/unpatched
> configuration.
> For smaller message sizes I get consistent numbers. However, at some
> point, as I increase the message size, the results become unstable.
> For example, for a 512B message I see two scenarios:
> vm utilization 100%, vhost utilization 75%, throughput ~6300
> vm utilization 80%, vhost utilization 13%, throughput ~9400 (line rate)
> 
> I don't know why vhost is behaving that way for certain message sizes.
> Do you have any insight into why this is happening?

Have you tried looking at the actual Ethernet packet sizes?
The traffic may well jump between small packets (the size of the
writes) and full-sized ones.
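
A quick way to check, assuming you can run tcpdump on the host side
(eth0 and <guest-ip> are placeholders for whatever your setup uses):

	# -e prints the link-level header, so each line includes the
	# on-wire frame length; -n suppresses name resolution
	tcpdump -e -n -i eth0 host <guest-ip>

If the frame lengths flip between roughly the write size and ~1514
bytes between runs, that would explain the two throughput scenarios
you are seeing.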

If you are trying to measure the per-packet 'cost' of Ethernet, you
need to use UDP. However, that probably exercises different code paths.
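
For example (just a sketch; in netperf, -m after the -- is the
test-specific send size, and with UDP_STREAM each send becomes one
datagram, so the wire size tracks -m directly):

	for m in 64 128 256 512 1024 1460; do
		netperf -H <guest-ip> -t UDP_STREAM -l 30 -- -m $m
	done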

	David



