Message-Id: <1530526462-920-1-git-send-email-wanpengli@tencent.com>
Date: Mon, 2 Jul 2018 18:14:20 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: [PATCH v2 0/2] KVM: x86: Add PV IPIs support
Use a hypercall to send IPIs with a single vmexit, instead of one vmexit
per destination in xAPIC/x2APIC physical mode and one vmexit per cluster
in x2APIC cluster mode.
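Below is a minimal sketch of the guest-side fast path, assuming the
two-bitmask ABI described in the changelog: each destination APIC ID sets
one bit across a pair of 64-bit masks (covering APIC IDs 0-127), and a
single KVM_HC_SEND_IPI hypercall replaces the per-destination ICR writes.
Names such as kvm_send_ipi_mask and orig_apic are illustrative, not
necessarily those of the actual patch:

#include <linux/cpumask.h>
#include <asm/apic.h>
#include <asm/kvm_para.h>

static struct apic orig_apic;	/* saved copy of the native APIC driver */

static void kvm_send_ipi_mask(const struct cpumask *mask, int vector)
{
	unsigned long ipi_bitmap[2] = { 0, 0 };
	unsigned int cpu, apic_id;

	for_each_cpu(cpu, mask) {
		apic_id = per_cpu(x86_cpu_to_apicid, cpu);
		if (apic_id >= 2 * BITS_PER_LONG) {
			/* Sparse APIC ID > 128: use the original hooks. */
			orig_apic.send_IPI_mask(mask, vector);
			return;
		}
		__set_bit(apic_id, ipi_bitmap);
	}

	/* One vmexit for the whole mask instead of one per destination. */
	kvm_hypercall3(KVM_HC_SEND_IPI, ipi_bitmap[0], ipi_bitmap[1], vector);
}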
Even with QEMU interrupt remapping and PV TLB shootdown enabled, I can
still observe a ~14% performance boost in the ebizzy benchmark on a
64-vCPU VM, and the total MSR-induced vmexits are reduced by ~70%.
The patchset implements PV IPIs for VMs with up to 128 vCPUs, which is
really common in cloud environments. After this patchset is applied, I
can continue to add support for VMs with more than 128 vCPUs; that
implementation has to introduce more complex logic.
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>
v1 -> v2:
* fall back to the original APIC hooks for sparse APIC IDs > 128 or on
  any other error
* use two bitmask arguments so that one hypercall handles 128 vCPUs
  (see the host-side sketch after this list)
* fix the KVM_FEATURE_PV_SEND_IPI documentation
* document the hypercall
* fix NMI selftest failures
* fix build errors reported by the 0day bot
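For reference, a hypothetical sketch of the matching host-side handler;
the real implementation in patch 2/2 may differ in naming and in how it
resolves APIC IDs (vcpu_id is used here as a simplification):

#include <linux/kvm_host.h>
#include "lapic.h"	/* kvm_apic_set_irq() */

static int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
			   unsigned long ipi_bitmap_high, int vector)
{
	struct kvm_lapic_irq irq = {
		.delivery_mode	= APIC_DM_FIXED,
		.vector		= vector,
	};
	struct kvm_vcpu *vcpu;
	int i, sent = 0;

	/* Bit N of the combined 128-bit mask targets APIC ID N. */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		u32 id = vcpu->vcpu_id;

		if ((id < 64 && (ipi_bitmap_low & (1UL << id))) ||
		    (id >= 64 && id < 128 &&
		     (ipi_bitmap_high & (1UL << (id - 64))))) {
			kvm_apic_set_irq(vcpu, &irq, NULL);
			sent++;
		}
	}

	return sent;	/* number of vCPUs the IPI was delivered to */
}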
Wanpeng Li (2):
KVM: X86: Implement PV IPI in linux guest
KVM: X86: Implement PV send IPI support
Documentation/virtual/kvm/cpuid.txt | 4 ++
Documentation/virtual/kvm/hypercalls.txt | 6 ++
arch/x86/include/uapi/asm/kvm_para.h | 1 +
arch/x86/kernel/kvm.c | 99 ++++++++++++++++++++++++++++++++
arch/x86/kvm/cpuid.c | 3 +-
arch/x86/kvm/x86.c | 42 ++++++++++++++
include/uapi/linux/kvm_para.h | 1 +
7 files changed, 155 insertions(+), 1 deletion(-)
--
2.7.4