Message-ID: <e6caee13-f8f7-596c-fb37-6120e7c25f99@redhat.com>
Date: Tue, 18 Feb 2020 16:33:44 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <kernellwp@...il.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH v4 1/2] KVM: X86: Less kvmclock sync induced vmexits after
 VM boots

On 18/02/20 15:54, Vitaly Kuznetsov wrote:
>> -	schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
>> -					KVMCLOCK_SYNC_PERIOD);
>> +	if (vcpu->vcpu_idx == 0)
>> +		schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
>> +					KVMCLOCK_SYNC_PERIOD);
>>  }
>> 
>>  void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
> Forgive my ignorance, but I was under the impression that
> schedule_delayed_work() doesn't do anything if the work is already
> queued (see queue_delayed_work_on()), and we seem to be scheduling the
> same work (&kvm->arch.kvmclock_sync_work), which is per-VM (not
> per-vCPU). Do we actually happen to finish executing it before the
> next vCPU is created, or why does the storm you describe happen?
No, the work executes 5 minutes after it is queued (KVMCLOCK_SYNC_PERIOD),
so it is still pending while the remaining vCPUs are created and the
subsequent calls are no-ops. I agree that the patch shouldn't really be
necessary, though you do save on cache-line bouncing due to
test_and_set_bit().
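
For reference, the pending check lives at the top of
queue_delayed_work_on(); a simplified sketch, abridged from
kernel/workqueue.c:

	bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
				   struct delayed_work *dwork,
				   unsigned long delay)
	{
		struct work_struct *work = &dwork->work;
		bool ret = false;
		unsigned long flags;

		local_irq_save(flags);

		/*
		 * This atomic RMW runs on every call, even when the
		 * work is already pending and nothing gets queued;
		 * that is the cache-line bouncing mentioned above.
		 */
		if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT,
				      work_data_bits(work))) {
			__queue_delayed_work(cpu, wq, dwork, delay);
			ret = true;
		}

		local_irq_restore(flags);
		return ret;
	}

Once vCPU 0 has armed the work, every later call just sees the bit set
and returns false.
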
Paolo