Date: Mon, 17 Feb 2020 18:40:19 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
	Sean Christopherson <sean.j.christopherson@...el.com>,
	Wanpeng Li <wanpengli@...cent.com>,
	Vitaly Kuznetsov <vkuznets@...hat.com>,
	Jim Mattson <jmattson@...gle.com>,
	Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH v3 1/2] KVM: X86: Less kvmclock sync induced vmexits after VM boots

On Mon, 17 Feb 2020 at 18:36, Wanpeng Li <kernellwp@...il.com> wrote:
>
> From: Wanpeng Li <wanpengli@...cent.com>
>
> In the progress of vCPUs creation, it queues a kvmclock sync worker to
> the global
> workqueue before each vCPU creation completes. Each worker will be scheduled
> after 300 * HZ delay and request a kvmclock update for all vCPUs and kick them
> out. This is especially worse when scaling to large VMs due to a lot of vmexits.
> Just one worker as a leader to trigger the kvmclock sync request for
> all vCPUs is
> enough.

Sorry for the alignment.

> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> ---
>  arch/x86/kvm/x86.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fb5d64e..d0ba2d4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9390,8 +9390,9 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
>  	if (!kvmclock_periodic_sync)
>  		return;
>
> -	schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
> -					KVMCLOCK_SYNC_PERIOD);
> +	if (kvm->created_vcpus == 1)
> +		schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
> +				      KVMCLOCK_SYNC_PERIOD);
>  }
>
>  void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
> --
> 2.7.4
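The guard in the hunk above works because kvm->created_vcpus is
incremented under kvm->lock as each vCPU is created, so only the first
vCPU's postcreate hook ever arms the periodic worker; the remaining
N-1 creations skip the schedule_delayed_work() call entirely. For
readers outside the kernel tree, here is a minimal user-space sketch
of the same "first creator arms the single periodic worker" pattern.
All names here (struct vm, vcpu_postcreate, kvmclock_sync_worker) are
illustrative only, pthreads and sleep() stand in for the kernel's
delayed workqueue, and a one-second period replaces 300 * HZ to keep
the demo short:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct vm {
	pthread_mutex_t lock;
	int created_vcpus;	/* mirrors kvm->created_vcpus */
};

static void *kvmclock_sync_worker(void *arg)
{
	(void)arg;
	/* Stands in for kvmclock_sync_work: periodically request a
	 * clock update for all vCPUs.  The kernel delay is 300 * HZ;
	 * one second here keeps the demo observable. */
	for (;;) {
		sleep(1);
		puts("sync kvmclock for all vCPUs");
	}
	return NULL;
}

static void vcpu_postcreate(struct vm *vm)
{
	pthread_t worker;

	pthread_mutex_lock(&vm->lock);
	/* The patch's point: only the first vCPU arms the worker, so
	 * creating N vCPUs no longer queues N identical sync workers. */
	if (++vm->created_vcpus == 1) {
		pthread_create(&worker, NULL, kvmclock_sync_worker, vm);
		pthread_detach(worker);
	}
	pthread_mutex_unlock(&vm->lock);
}

int main(void)
{
	struct vm vm = { .lock = PTHREAD_MUTEX_INITIALIZER };
	int i;

	for (i = 0; i < 4; i++)
		vcpu_postcreate(&vm);	/* only the first call starts the worker */

	sleep(2);	/* let the worker fire once before exiting */
	return 0;
}

Build with "cc -pthread sketch.c": four simulated vCPU creations
produce exactly one running worker. The mutex in the sketch merely
keeps the user-space analogue self-consistent; the kernel already
serializes the created_vcpus increment under kvm->lock during vCPU
creation.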