Message-Id: <1510192934-5369-1-git-send-email-wanpeng.li@hotmail.com>
Date: Wed, 8 Nov 2017 18:02:11 -0800
From: Wanpeng Li <kernellwp@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Wanpeng Li <wanpeng.li@...mail.com>
Subject: [PATCH RESEND 0/3] KVM: Paravirt remote TLB flush
The remote flushing APIs do a busy wait, which is fine in a bare-metal
scenario. But within a guest, the target vCPUs might have been preempted
or blocked. In this scenario, the initiator vCPU can end up
busy-waiting for a long time.
This patch set implements paravirtual TLB flushing that does not wait
for vCPUs that are not running; instead, those vCPUs flush their TLB
on the next guest entry. The idea was discussed here:
https://lkml.org/lkml/2012/2/20/157
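
For illustration only (this is not the code in the series; the struct,
flag names and per-cpu variable below are made up, and the hook
prototype follows the x86 flush_tlb_others() callback of this kernel
era), a minimal sketch of the guest-side idea: skip vCPUs the host has
marked preempted and atomically tag them so the hypervisor flushes
their TLB on the next guest entry.

	#define PV_VCPU_PREEMPTED	(1 << 0)
	#define PV_VCPU_FLUSH_TLB	(1 << 1)

	/* Hypothetical per-vCPU state shared between host and guest. */
	struct pv_vcpu_state {
		u8 flags;	/* written by host, read/cmpxchg'd by guest */
	};

	static DEFINE_PER_CPU(struct pv_vcpu_state, pv_vcpu_state);

	static void pv_flush_tlb_others(const struct cpumask *cpumask,
					const struct flush_tlb_info *info)
	{
		/* On-stack cpumask only for brevity in this sketch. */
		struct cpumask flushmask;
		int cpu;
		u8 state;

		cpumask_copy(&flushmask, cpumask);
		for_each_cpu(cpu, &flushmask) {
			struct pv_vcpu_state *st = &per_cpu(pv_vcpu_state, cpu);

			state = READ_ONCE(st->flags);
			/*
			 * cmpxchg so the preempted check and the flush-on-enter
			 * mark happen atomically; if the vCPU was rescheduled
			 * in between, fall back to the IPI below.
			 */
			if ((state & PV_VCPU_PREEMPTED) &&
			    cmpxchg(&st->flags, state,
				    state | PV_VCPU_FLUSH_TLB) == state)
				cpumask_clear_cpu(cpu, &flushmask);
		}

		/* IPI only the vCPUs that are actually running. */
		native_flush_tlb_others(&flushmask, info);
	}

The complementary host-side step would be: before resuming a vCPU, if
its PV_VCPU_FLUSH_TLB bit is set, flush that vCPU's guest TLB and clear
the bit, so a deferred flush is never lost.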
The best result is achieved when we're overcommitting the host by running
multiple vCPUs on each pCPU. In this case PV TLB flush avoids touching
vCPUs which are not scheduled and avoids the busy wait on the initiating vCPU.
In addition, thanks to commit 9e52fc2b50d ("x86/mm: Enable RCU based
page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)"), it is safe to
defer the flush for vCPUs that are not running.
Tested on a Haswell i7 desktop with 4 cores (2 HT), i.e. 8 pCPUs, running
ebizzy in one Linux guest.
ebizzy -M
           vanilla    optimized    boost
 8 vCPUs     10152        10083    -0.68%
16 vCPUs      1224         4866    297.5%
24 vCPUs      1109         3871      249%
32 vCPUs      1025         3375    229.3%
Wanpeng Li (3):
KVM: Add vCPU running/preempted state
KVM: Add paravirt remote TLB flush
KVM: Add flush_on_enter before guest enter
arch/x86/include/uapi/asm/kvm_para.h | 4 ++++
arch/x86/kernel/kvm.c | 31 ++++++++++++++++++++++++++++++-
arch/x86/kvm/x86.c | 12 ++++++++++--
3 files changed, 44 insertions(+), 3 deletions(-)
--
2.7.4