Message-ID: <2abe4b19-e41e-34f9-0a3c-30812c7b719e@redhat.com>
Date: Thu, 8 Apr 2021 14:25:41 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
vkuznets@...hat.com, dwmw@...zon.co.uk
Subject: Re: [PATCH 1/2] KVM: x86: reduce pvclock_gtod_sync_lock critical sections
On 08/04/21 14:00, Marcelo Tosatti wrote:
>>
>> KVM_REQ_MCLOCK_INPROGRESS is only needed to kick running vCPUs out of the
>> execution loop;
> We do not want vcpus with different system_timestamp/tsc_timestamp
> pair:
>
> * To avoid that problem, do not allow visibility of distinct
> * system_timestamp/tsc_timestamp values simultaneously: use a master
> * copy of host monotonic time values. Update that master copy
> * in lockstep.
>
> So KVM_REQ_MCLOCK_INPROGRESS also ensures that no vcpu enters
> guest mode (via vcpu->requests check before VM-entry) with a
> different system_timestamp/tsc_timestamp pair.
Yes, this is what KVM_REQ_MCLOCK_INPROGRESS does, but it does not have to
be done that way. All you really need is the IPI with KVM_REQUEST_WAIT,
which ensures that updates happen after the vCPUs have exited guest
mode. You don't need to loop on vcpu->requests, for example, because
kvm_guest_time_update() could just spin on pvclock_gtod_sync_lock until
pvclock_update_vm_gtod_copy() is done.
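Roughly something like this -- only a sketch, not the patch I'm going to
post, and the choice of KVM_REQ_CLOCK_UPDATE | KVM_REQUEST_WAIT as the
request is just for illustration:

	static void kvm_gen_update_masterclock(struct kvm *kvm)
	{
		struct kvm_arch *ka = &kvm->arch;

		spin_lock(&ka->pvclock_gtod_sync_lock);

		/*
		 * KVM_REQUEST_WAIT kicks running vCPUs out of guest mode
		 * and waits for the IPIs to be acknowledged.  A vCPU that
		 * re-enters processes KVM_REQ_CLOCK_UPDATE, and
		 * kvm_guest_time_update() then spins on
		 * pvclock_gtod_sync_lock until the update below is done.
		 */
		kvm_make_all_cpus_request(kvm,
					  KVM_REQ_CLOCK_UPDATE | KVM_REQUEST_WAIT);

		pvclock_update_vm_gtod_copy(kvm);

		spin_unlock(&ka->pvclock_gtod_sync_lock);
	}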
So this morning I tried protecting the kvm->arch kvmclock fields with a
seqcount, which is also nice because get_kvmclock_ns() no longer has to
bounce the cacheline of pvclock_gtod_sync_lock. I'll post it tomorrow
or next week.
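Something along these lines (again only a sketch of the idea, not the
actual patch; pvclock_sc is a made-up seqcount_t field in struct
kvm_arch, and the read side just reuses __get_kvmclock_ns() for the
computation):

	/* Write side, still serialized by pvclock_gtod_sync_lock. */
	static void kvm_sync_masterclock(struct kvm *kvm)
	{
		struct kvm_arch *ka = &kvm->arch;

		spin_lock(&ka->pvclock_gtod_sync_lock);
		write_seqcount_begin(&ka->pvclock_sc);
		pvclock_update_vm_gtod_copy(kvm);
		write_seqcount_end(&ka->pvclock_sc);
		spin_unlock(&ka->pvclock_gtod_sync_lock);
	}

	/* Read side: never touches pvclock_gtod_sync_lock's cacheline. */
	u64 get_kvmclock_ns(struct kvm *kvm)
	{
		struct kvm_arch *ka = &kvm->arch;
		unsigned int seq;
		u64 ns;

		do {
			seq = read_seqcount_begin(&ka->pvclock_sc);
			ns = __get_kvmclock_ns(kvm);
		} while (read_seqcount_retry(&ka->pvclock_sc, seq));

		return ns;
	}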
Paolo