Message-ID: <1502095466-21312-1-git-send-email-longpeng2@huawei.com>
Date:   Mon, 7 Aug 2017 16:44:23 +0800
From:   "Longpeng(Mike)" <longpeng2@...wei.com>
To:     <pbonzini@...hat.com>, <rkrcmar@...hat.com>
CC:     <agraf@...e.com>, <borntraeger@...ibm.com>, <cohuck@...hat.com>,
        <christoffer.dall@...aro.org>, <marc.zyngier@....com>,
        <james.hogan@...tec.com>, <kvm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <weidong.huang@...wei.com>,
        <arei.gonglei@...wei.com>, <wangxinxin.wang@...wei.com>,
        <longpeng.mike@...il.com>, <david@...hat.com>,
        "Longpeng(Mike)" <longpeng2@...wei.com>
Subject: [PATCH 0/3] KVM: optimize the kvm_vcpu_on_spin

This is a simple optimization for kvm_vcpu_on_spin; the
main idea is described in patch 1's commit message.
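
To make the idea concrete, here is a minimal, self-contained sketch of the heuristic this kind of spin-loop optimization relies on (the names, struct fields, and helper below are illustrative assumptions, not the kernel's actual API): when a vCPU exits because its guest is busy-spinning on a lock, the host prefers to yield to a candidate vCPU that was preempted while running in guest kernel mode, since only such a vCPU can plausibly be holding the contended kernel lock.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model only: fields and function names are hypothetical,
 * simplified stand-ins for the real KVM data structures. */
struct vcpu {
	int id;
	bool preempted;            /* candidate was descheduled by the host */
	bool preempted_in_kernel;  /* cached: was it in guest kernel mode?  */
};

/* Pick the first eligible yield target; return NULL if none.
 * kernel_mode says whether the spinning vCPU itself was in kernel mode. */
static struct vcpu *pick_yield_target(struct vcpu *vcpus, size_t n,
				      bool kernel_mode)
{
	for (size_t i = 0; i < n; i++) {
		struct vcpu *v = &vcpus[i];

		if (!v->preempted)
			continue;
		/* A kernel-mode spinner should not boost a vCPU that was
		 * preempted in user mode: it cannot hold the kernel lock
		 * the spinner is waiting on. */
		if (kernel_mode && !v->preempted_in_kernel)
			continue;
		return v;
	}
	return NULL;
}
```

Caching the in-kernel state at preemption time (rather than querying it on every spin exit) is what keeps the check cheap on the directed-yield path.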

I ran some tests based on the RFC version; the results show
that it improves performance slightly.

== Geekbench-3.4.1 ==
VM1: 	8U,4G, vcpu(0...7) is 1:1 pinned to pcpu(6...11,18,19)
	running Geekbench-3.4.1, 10 runs
VM2/VM3/VM4: configuration is the same as VM1
	stress each vcpu's usage (seen via top in guest) to 40%

The comparison of each testcase's score:
(higher is better)
		before		after		improve
Integer
 single		1176.7		1179.0		0.2%
 multi		3459.5		3426.5		-0.9%
Float
 single		1150.5		1150.9		0.0%
 multi		3364.5		3391.9		0.8%
Memory(stream)
 single		1768.7		1773.1		0.2%
 multi		2511.6		2557.2		1.8%
Overall
 single		1284.2		1286.2		0.2%
 multi		3231.4		3238.4		0.2%


== kernbench-0.42 ==
VM1:    8U,12G, vcpu(0...7) is 1:1 pinned to pcpu(6...11,18,19)
        running "kernbench -n 10"
VM2/VM3/VM4: configuration is the same as VM1
        stress each vcpu's usage (seen via top in guest) to 40%

The comparison of 'Elapsed Time':
(lower is better)
		before		after		improve
load -j4	12.762		12.751		0.1%
load -j32	9.743		8.955		8.1%
load -j		9.688		9.229		4.7%


Physical Machine:
  Architecture:          x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Byte Order:            Little Endian
  CPU(s):                24
  On-line CPU(s) list:   0-23
  Thread(s) per core:    2
  Core(s) per socket:    6
  Socket(s):             2
  NUMA node(s):          2
  Vendor ID:             GenuineIntel
  CPU family:            6
  Model:                 45
  Model name:            Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz
  Stepping:              7
  CPU MHz:               2799.902
  BogoMIPS:              5004.67
  Virtualization:        VT-x
  L1d cache:             32K
  L1i cache:             32K
  L2 cache:              256K
  L3 cache:              15360K
  NUMA node0 CPU(s):     0-5,12-17
  NUMA node1 CPU(s):     6-11,18-23

---
Changes since RFC:
 - only cache result for X86. [David & Cornelia & Paolo]
 - add performance numbers. [David]
 - implement arm/s390 support. [Christoffer & David]
 - refactor the implementations. [me]

---
Longpeng(Mike) (3):
  KVM: add spinlock-exiting optimize framework
  KVM: X86: implement the logic for spinlock optimization
  KVM: implement spinlock optimization logic for arm/s390

 arch/mips/kvm/mips.c            | 10 ++++++++++
 arch/powerpc/kvm/powerpc.c      | 10 ++++++++++
 arch/s390/kvm/kvm-s390.c        | 10 ++++++++++
 arch/x86/include/asm/kvm_host.h |  5 +++++
 arch/x86/kvm/svm.c              |  6 ++++++
 arch/x86/kvm/vmx.c              | 20 ++++++++++++++++++++
 arch/x86/kvm/x86.c              | 15 +++++++++++++++
 include/linux/kvm_host.h        |  2 ++
 virt/kvm/arm/arm.c              | 10 ++++++++++
 virt/kvm/kvm_main.c             |  4 ++++
 10 files changed, 92 insertions(+)

-- 
1.8.3.1

