Message-Id: <1563449947-7749-1-git-send-email-wanpengli@tencent.com>
Date: Thu, 18 Jul 2019 19:39:06 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Paul Mackerras <paulus@...abs.org>,
Marc Zyngier <maz@...nel.org>
Subject: [PATCH v2 1/2] KVM: Boost vCPUs that are delivering interrupts
From: Wanpeng Li <wanpengli@...cent.com>
Inspired by commit 9cac38dd5d ("KVM/s390: Set preempted flag during vcpu
wakeup and interrupt delivery"), we want to boost not only lock holders
but also vCPUs that are delivering interrupts. Most smp_call_function_many
calls are synchronous IPI calls, so the IPI target vCPUs are also good
yield candidates. This patch introduces vcpu->ready so that vCPUs can be
boosted both at wakeup and at interrupt delivery time.
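
For reference, the consumer of the new flag is the directed-yield scan in
kvm_vcpu_on_spin(), patched below. The following stand-alone user-space
model sketches that two-pass round-robin scan; the struct layout, loop
bounds, and the pick_boost_candidate() helper are simplified illustrative
assumptions, not the kernel code:

/*
 * Stand-alone user-space model of the directed-yield candidate scan
 * in kvm_vcpu_on_spin(). Types, bounds, and helper names are
 * simplified illustrative assumptions, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_VCPUS 4

struct vcpu {
	int id;
	bool preempted;	/* scheduled out while runnable */
	bool ready;	/* woken up / interrupt target: good yield candidate */
};

/*
 * Two-pass scan that starts just after the last boosted vCPU, so that
 * boosting rotates fairly instead of always favoring low-numbered vCPUs.
 */
static int pick_boost_candidate(const struct vcpu vcpus[], int me,
				int last_boosted)
{
	for (int pass = 0; pass < 2; pass++) {
		for (int i = 0; i < NR_VCPUS; i++) {
			if (!pass && i <= last_boosted)
				continue;	/* pass 0: only vCPUs after last_boosted */
			if (pass && i > last_boosted)
				break;		/* pass 1: wrap around up to last_boosted */
			if (i == me || !vcpus[i].ready)
				continue;	/* the scan now keys on ->ready */
			return i;		/* yield the physical CPU to this vCPU */
		}
	}
	return -1;
}

int main(void)
{
	struct vcpu vcpus[NR_VCPUS] = {
		{ .id = 0 }, { .id = 1 }, { .id = 2 }, { .id = 3 },
	};

	/* vCPU 2 just had an interrupt delivered: mark it ready. */
	vcpus[2].ready = true;

	/* vCPU 0 spins on a lock and looks for someone to yield to. */
	printf("boost candidate: vCPU %d\n",
	       pick_boost_candidate(vcpus, 0, NR_VCPUS - 1));
	return 0;
}

In this model vCPU 2 is selected even though it was never preempted,
which is exactly the case the new flag is meant to cover.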
Testing on an 80-HT, 2-socket Xeon Skylake server, with 80-vCPU, 80GB
RAM VMs:

ebizzy -M
           vanilla    boosting    improved
1VM         21443       23520          9%
2VM          2800        8000        180%
3VM          1800        3100         72%
Testing on my 8-HT Haswell desktop, with two 8-vCPU, 8GB RAM VMs, one
running ebizzy -M and the other running 'stress --cpu 2':

w/ boosting + w/o pv sched yield(vanilla)

           vanilla    boosting    improved
             1570        4000        155%

w/ boosting + w/ pv sched yield(vanilla)

           vanilla    boosting    improved
             1844        5157        179%
w/o boosting, perf top in VM:

 72.33%  [kernel]       [k] smp_call_function_many
  4.22%  [kernel]       [k] call_function_interrupt
  3.71%  [kernel]       [k] async_page_fault

w/ boosting, perf top in VM:

 38.43%  [kernel]       [k] smp_call_function_many
  6.31%  [kernel]       [k] async_page_fault
  6.13%  libc-2.23.so   [.] __memcpy_avx_unaligned
  4.88%  [kernel]       [k] call_function_interrupt
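
The smp_call_function_many hotspot comes from the synchronous wait: with
wait set, the caller spins in csd_lock_wait() until every target has run
the IPI handler, so a preempted target vCPU makes the sender burn cycles
for nothing. The stand-alone pthread model below illustrates only that
busy-wait pattern, under assumed names (ipi_pending, target_vcpu); it is
not the kernel implementation:

/*
 * Toy model of a synchronous IPI: the sender busy-waits on a completion
 * flag that only the target clears, similar in spirit to csd_lock_wait().
 * While the "target vCPU" thread is off CPU, the sender burns cycles,
 * which is why boosting IPI targets helps. Illustration only.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_int ipi_pending;

static void *target_vcpu(void *arg)
{
	struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000000 };

	(void)arg;
	nanosleep(&ts, NULL);		/* pretend we were preempted for 1ms */
	atomic_store(&ipi_pending, 0);	/* "run" the IPI handler and ack it */
	return NULL;
}

int main(void)
{
	pthread_t t;
	unsigned long spins = 0;

	atomic_store(&ipi_pending, 1);	/* sender posts the function call */
	pthread_create(&t, NULL, target_vcpu, NULL);

	while (atomic_load(&ipi_pending))	/* synchronous wait, like wait=1 */
		spins++;

	pthread_join(t, NULL);
	printf("sender spun %lu times waiting for the target\n", spins);
	return 0;
}

The longer the target stays off CPU, the higher the spin count; boosting
the IPI target vCPU shortens exactly this window.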
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Cc: Christian Borntraeger <borntraeger@...ibm.com>
Cc: Paul Mackerras <paulus@...abs.org>
Cc: Marc Zyngier <maz@...nel.org>
Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
---
 arch/s390/kvm/interrupt.c |  2 +-
 include/linux/kvm_host.h  |  1 +
 virt/kvm/kvm_main.c       | 12 +++++++++---
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 9dde4d7..26f8bf4 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -1240,7 +1240,7 @@ void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu)
 		 * The vcpu gave up the cpu voluntarily, mark it as a good
 		 * yield-candidate.
 		 */
-		vcpu->preempted = true;
+		vcpu->ready = true;
 		swake_up_one(&vcpu->wq);
 		vcpu->stat.halt_wakeup++;
 	}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c5da875..5c5b586 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -318,6 +318,7 @@ struct kvm_vcpu {
 	} spin_loop;
 #endif
 	bool preempted;
+	bool ready;
 	struct kvm_vcpu_arch arch;
 	struct dentry *debugfs_dentry;
 };
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b4ab59d..8412900 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2404,8 +2404,10 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 	int me;
 	int cpu = vcpu->cpu;
 
-	if (kvm_vcpu_wake_up(vcpu))
+	if (kvm_vcpu_wake_up(vcpu)) {
+		vcpu->ready = true;
 		return;
+	}
 
 	me = get_cpu();
 	if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
@@ -2500,7 +2502,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 				continue;
 			} else if (pass && i > last_boosted_vcpu)
 				break;
-			if (!READ_ONCE(vcpu->preempted))
+			if (!READ_ONCE(vcpu->ready))
 				continue;
 			if (vcpu == me)
 				continue;
@@ -4205,6 +4207,8 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
 
 	if (vcpu->preempted)
 		vcpu->preempted = false;
+	if (vcpu->ready)
+		vcpu->ready = false;
 
 	kvm_arch_sched_in(vcpu, cpu);
 
@@ -4216,8 +4220,10 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 {
 	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
 
-	if (current->state == TASK_RUNNING)
+	if (current->state == TASK_RUNNING) {
 		vcpu->preempted = true;
+		vcpu->ready = true;
+	}
 
 	kvm_arch_vcpu_put(vcpu);
 }
--
2.7.4