Message-ID: <20120427161727.27082.43096.stgit@abhimanyu>
Date: Fri, 27 Apr 2012 21:53:02 +0530
From: "Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>
To: peterz@...radead.org, mingo@...e.hu
Cc: jeremy@...p.org, mtosatti@...hat.com, kvm@...r.kernel.org,
x86@...nel.org, vatsa@...ux.vnet.ibm.com,
linux-kernel@...r.kernel.org, avi@...hat.com, hpa@...or.com
Subject: [RFC PATCH v1 0/5] KVM paravirt remote flush tlb
The remote TLB flush APIs do a busy wait, which is fine in the
bare-metal scenario. But within a guest, the target vcpus might have
been pre-empted or blocked, and in that case the initiator vcpu ends
up busy-waiting for a long time.
We discovered this in our gang scheduling tests; another way to solve
it is to para-virtualize flush_tlb_others_ipi.
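For reference, the busy wait in question is the loop at the end of
flush_tlb_others_ipi() (arch/x86/mm/tlb.c); a simplified sketch of it,
with the setup and locking omitted:

        /* ... f->flush_cpumask has been set to the target cpus ... */
        apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
                            INVALIDATE_TLB_VECTOR_START + sender);

        /*
         * Each target cpu clears its bit in the IPI handler, and the
         * initiator spins until the mask is empty.  This is the
         * flush_tlb_others_ipi/__bitmap_empty time in the profiles
         * below, and it keeps spinning even when a target vcpu has
         * been pre-empted by the host.
         */
        while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
                cpu_relax();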
This patch set implements a para-virt TLB flush that does not wait
for vcpus that are sleeping; instead, the sleeping vcpus flush their
TLB on the next guest entry. The idea was discussed here:
https://lkml.org/lkml/2012/2/20/157
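To illustrate the idea, here is a rough sketch of the guest side. The
helpers vcpu_is_running() and set_flush_on_enter() are placeholders
for this sketch, not the exact interfaces added by patches 1-3:

        static void kvm_flush_tlb_others(const struct cpumask *cpumask,
                                         struct mm_struct *mm, unsigned long va)
        {
                int cpu;
                cpumask_var_t to_ipi;

                /* If we cannot allocate a scratch mask, fall back to the
                 * native behaviour of IPI'ing and waiting for everybody. */
                if (!zalloc_cpumask_var(&to_ipi, GFP_ATOMIC)) {
                        native_flush_tlb_others(cpumask, mm, va);
                        return;
                }

                for_each_cpu(cpu, cpumask) {
                        if (vcpu_is_running(cpu))       /* placeholder */
                                cpumask_set_cpu(cpu, to_ipi);
                        else
                                /* placeholder: have the host flush this
                                 * vcpu's TLB on its next guest entry */
                                set_flush_on_enter(cpu);
                }

                /* IPI (and wait for) only the vcpus that are running. */
                native_flush_tlb_others(to_ipi, mm, va);
                free_cpumask_var(to_ipi);
        }

The running/pre-empted state consulted here is what patches 1 and 2
add on the guest and host side; the real implementation also has to
handle the race between reading that state and the vcpu re-entering
the guest.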
This patch set depends on the ticketlock[1] and KVM paravirt
spinlock[2] patches, and is based on 3.4.0-rc4 (commit: af3a3ab2).
Here are the results from non-PLE hardware, running the ebizzy
workload inside the VMs. The table shows the ebizzy scores normalized
to the baseline.
Machine:
8-CPU Intel Xeon, HT disabled; 64-bit VMs (8 vcpus, 1G RAM)
        Gang    pv_spin    pv_flush    pv_spin_flush
1VM     1.01    0.30       1.01        0.49
2VMs    7.07    0.53       0.91        4.04
4VMs    9.07    0.59       0.31        5.27
8VMs    9.99    1.58       0.48        7.65
Perf report from the guest VM:
Base:
41.25% [k] flush_tlb_others_ipi
41.21% [k] __bitmap_empty
7.66% [k] _raw_spin_unlock_irqrestore
3.07% [.] __memcpy_ssse3_back
1.20% [k] clear_page
gang:
22.92% [.] __memcpy_ssse3_back
15.46% [k] _raw_spin_unlock_irqrestore
9.82% [k] clear_page
6.35% [k] do_page_fault
4.57% [k] down_read_trylock
3.36% [k] __mem_cgroup_commit_charge
3.26% [k] __x2apic_send_IPI_mask
3.23% [k] up_read
2.87% [k] __bitmap_empty
2.78% [k] flush_tlb_others_ipi
pv_spin:
34.82% [k] __bitmap_empty
34.75% [k] flush_tlb_others_ipi
25.10% [k] _raw_spin_unlock_irqrestore
1.52% [.] __memcpy_ssse3_back
pv_flush:
37.34% [k] _raw_spin_unlock_irqrestore
18.26% [k] native_halt
11.58% [.] __memcpy_ssse3_back
4.83% [k] clear_page
3.68% [k] do_page_fault
pv_spin_flush:
71.13% [k] _raw_spin_unlock_irqrestore
8.89% [.] __memcpy_ssse3_back
4.68% [k] native_halt
3.92% [k] clear_page
2.31% [k] do_page_fault
Looking at the perf output for pv_flush and pv_spin_flush, in both
cases flush_tlb_others_ipi no longer contends for the cpu; the
initiator relinquishes the cpu so that the other vcpus can make
progress.
Comments?
Regards
Nikunj
1. https://lkml.org/lkml/2012/4/19/335
2. https://lkml.org/lkml/2012/4/23/123
---
Nikunj A. Dadhania (5):
KVM Guest: Add VCPU running/pre-empted state for guest
KVM-HV: Add VCPU running/pre-empted state for guest
KVM: Add paravirt kvm_flush_tlb_others
KVM: export kvm_kick_vcpu for pv_flush
KVM: Introduce PV kick in flush tlb
arch/x86/include/asm/kvm_host.h | 7 ++++
arch/x86/include/asm/kvm_para.h | 11 ++++++
arch/x86/include/asm/tlbflush.h | 9 +++++
arch/x86/kernel/kvm.c | 52 +++++++++++++++++++++++++-----
arch/x86/kvm/cpuid.c | 1 +
arch/x86/kvm/x86.c | 50 ++++++++++++++++++++++++++++-
arch/x86/mm/tlb.c | 68 +++++++++++++++++++++++++++++++++++++++
7 files changed, 188 insertions(+), 10 deletions(-)
--