Date:	Thu, 17 Sep 2009 17:17:08 -0700
From:	Alok Kataria <akataria@...are.com>
To:	Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>
Cc:	the arch/x86 maintainers <x86@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	Chris Wright <chrisw@...s-sol.org>,
	Rusty Russell <rusty@...tcorp.com.au>,
	virtualization@...ts.osdl.org
Subject: Paravirtualization on VMware's Platform [VMI].

Hi,

We ran a few experiments to compare performance of VMware's
paravirtualization technique (VMI) and hardware MMU technologies (HWMMU)
on VMware's hypervisor.

To give some background, VMI is VMware's paravirtualization
specification which tries to optimize CPU and MMU operations of the
guest operating system. For more information take a look at this
http://www.vmware.com/interfaces/paravirtualization.html
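
For illustration only: the core idea behind an interface like VMI is
to turn privileged CPU/MMU operations into indirect calls that the
hypervisor interface can fill in at boot. The sketch below is a
hedged, self-contained analogy in plain C; the names
(paravirt_set_pte, mmu_ops) are hypothetical stand-ins, not the
actual VMI ROM calls or kernel pv-ops symbols:

#include <stdio.h>

typedef unsigned long pte_t;    /* simplified page-table entry */

/* On bare hardware, a PTE update is a plain memory write. */
static void native_set_pte(pte_t *ptep, pte_t val)
{
        *ptep = val;
}

/* Under a VMI-style interface the same operation becomes a call
 * into the hypervisor, which can validate and batch the update
 * (hypothetical stand-in for a real hypercall). */
static void paravirt_set_pte(pte_t *ptep, pte_t val)
{
        printf("hypercall: update pte at %p to %#lx\n", (void *)ptep, val);
        *ptep = val;
}

/* The guest kernel binds one implementation at boot through an ops
 * table, mirroring the pv-ops indirection in arch/x86. */
struct mmu_ops {
        void (*set_pte)(pte_t *ptep, pte_t val);
};

static struct mmu_ops mmu = { .set_pte = native_set_pte };

int main(void)
{
        pte_t pte = 0;

        mmu.set_pte = paravirt_set_pte; /* "boot-time" selection */
        mmu.set_pte(&pte, 0x1007);      /* pfn 1 plus flag bits */
        return 0;
}

On bare hardware the table points at the native write; under a
hypervisor it points at the hypercall wrapper, so the rest of the
kernel never needs to know which one it got.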

In most of the benchmarks, EPT/NPT (HWMMU) is on par with or
outperforms VMI. The experiments compared performance across a range
of microbenchmarks and real-world workloads.

Host configuration used for testing.
* Dell PowerEdge 2970
* 2 x quad-core AMD Opteron 2384 2.7GHz (Shanghai C2), RVI capable.
* 8 GB (4 x 2GB) memory, NUMA enabled
* 2 x 300GB RAID 0 storage
* 2 x embedded 1Gb NICs (Broadcom NetXtreme II BCM5708 1000Base-T)
* Running a development build of ESX.

The guest was a SLES 10 SP2 based VM in both the VMI and non-VMI
cases, running kernel version 2.6.16.60-0.37_f594963d-vmipae.

Below is a short summary of the performance results for HWMMU versus
VMI. The results are averaged over 9 runs, with memory sized at 512MB
per VCPU in all experiments.
The figures are HWMMU-to-VMI ratios: a ratio greater than 1 means
HWMMU performed better than VMI.

compile workloads - 4-way               : 1.02, i.e. about 2% better.
compile workloads - 8-way               : 1.14, i.e. 14% better.
oracle swingbench - 4-way (small pages) : 1.34, i.e. 34% better.
oracle swingbench - 4-way (large pages) : 1.03, i.e. 3% better.
specjbb (large pages)                   : 0.99, i.e. 1% degradation.

Please note that specjbb is the worst-case benchmark for HWMMU due to
the higher TLB miss latency under nested paging, so a degradation of
only 1% on the worst-case benchmark is a good result.
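
To make the TLB-miss cost concrete: with nested paging, every
guest-physical address touched during the guest's page walk must
itself be translated through the host tables. A back-of-the-envelope
sketch follows; the 24-reference figure is the standard worst case
for 4-level guest and host tables, not a measurement from these runs:

#include <stdio.h>

/* Worst-case memory references to service one TLB miss under
 * nested paging: each of the g+1 guest-physical addresses (g
 * page-table pointers plus the final data address) is translated
 * through all h host levels, and the g guest entries themselves
 * are read: (g + 1) * h + g. */
static int nested_walk_refs(int g, int h)
{
        return (g + 1) * h + g;
}

int main(void)
{
        printf("native 4-level walk : %2d refs\n", 4);
        printf("nested 4x4 walk     : %2d refs\n", nested_walk_refs(4, 4)); /* 24 */
        return 0;
}

This is why TLB-miss-heavy workloads like specjbb are the worst case
for HWMMU, and why the large-page results are reported separately.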

VMware expects that these hardware virtualization features will be
ubiquitous by 2011.

Apart from the performance benefit, VMI was also important for Linux
on VMware's platform from a timekeeping point of view, but with
tickless kernels and the TSC improvements that went into the mainline
tree, we think VMI has outlived that requirement too.
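
For context, the direct-TSC path that makes a paravirtual clock less
necessary is simple to sketch. This is a hedged user-space
illustration (x86-only, and it assumes an invariant TSC, which is the
hardware improvement that made this workable), not kernel clocksource
code:

#include <stdint.h>
#include <stdio.h>

/* Read the timestamp counter directly; a guest can use this as a
 * time base once the TSC is stable enough to trust. */
static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;

        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        uint64_t t0 = rdtsc();
        uint64_t t1 = rdtsc();

        printf("delta cycles: %llu\n", (unsigned long long)(t1 - t0));
        return 0;
}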

In light of these results and the availability of such hardware, we
have decided to stop supporting VMI in our future products.

Given this new development, I wanted to discuss how we should go
about retiring the VMI code from mainline Linux, i.e. the vmi_32.c
and vmiclock_32.c bits.

One of the options I am contemplating is to drop the code from the
tip tree in this release cycle; given that this should be a low-risk
change, we can then remove it from Linus's tree later in the merge
cycle.

Let me know your views on this or if you think we should do this some
other way.

Thanks,
Alok


