Message-ID: <ED45576F-F1F4-452F-80CF-AACC723BFE7E@infradead.org>
Date: Wed, 10 Apr 2024 13:09:45 +0100
From: David Woodhouse <dwmw2@...radead.org>
To: paul@....org, Paul Durrant <xadimgnik@...il.com>,
Jack Allister <jalliste@...zon.com>
CC: bp@...en8.de, corbet@....net, dave.hansen@...ux.intel.com, hpa@...or.com,
kvm@...r.kernel.org, linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
mingo@...hat.com, pbonzini@...hat.com, seanjc@...gle.com, tglx@...utronix.de,
x86@...nel.org, Dongli Zhang <dongli.zhang@...cle.com>
Subject: Re: [PATCH v2 1/2] KVM: x86: Add KVM_[GS]ET_CLOCK_GUEST for accurate KVM clock migration
On 10 April 2024 11:29:13 BST, Paul Durrant <xadimgnik@...il.com> wrote:
>On 10/04/2024 10:52, Jack Allister wrote:
>> + * It's possible that this vCPU doesn't have a HVCLOCK configured
>> + * but the other vCPUs may. If this is the case calculate based
>> + * upon the time gathered in the seqcount but do not update the
>> + * vCPU specific PVTI. If we have one, then use that.
>
>Given this is a per-vCPU ioctl, why not fail in the case the vCPU doesn't have HVCLOCK configured? Or is your intention that a GET/SET should always work if TSC is stable?
It definitely needs to work for SET even when the vCPU hasn't been run yet (and doesn't have an hvclock in vcpu->arch.hv_clock).
I think it should ideally work for GET too. I did try arguing that if the vCPU hasn't set up its pvclock, it shouldn't care whether that clock is inaccurate. But there's a pathological asymmetric-multiprocessing (AMP) case where one vCPU is dedicated to an RTOS or something, and only the *other* vCPUs bring up their pvclock.
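To make concrete what the PVTI being got/set here encodes, a minimal self-contained sketch of the guest-side pvclock conversion (the struct mirrors the pvclock ABI layout; the names here are illustrative, not taken from this series):

#include <stdint.h>

/* Mirrors the layout of struct pvclock_vcpu_time_info in the pvclock ABI. */
struct pvti_example {
	uint32_t version;
	uint32_t pad0;
	uint64_t tsc_timestamp;
	uint64_t system_time;
	uint32_t tsc_to_system_mul;
	int8_t   tsc_shift;
	uint8_t  flags;
	uint8_t  pad[2];
};

/* Guest-visible KVM clock, in nanoseconds, for a given TSC reading. */
static uint64_t pvclock_read_ns(const struct pvti_example *pvti, uint64_t tsc)
{
	uint64_t delta = tsc - pvti->tsc_timestamp;

	/* Scale by tsc_shift, then by the fixed-point multiplier (>> 32). */
	if (pvti->tsc_shift >= 0)
		delta <<= pvti->tsc_shift;
	else
		delta >>= -pvti->tsc_shift;

	return pvti->system_time +
	       (uint64_t)(((__uint128_t)delta * pvti->tsc_to_system_mul) >> 32);
}

The question above is then just which PVTI GET returns: the one this vCPU configured, or one synthesized from the KVM-wide master clock when it hasn't.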
This of course brings you to the question of why we have it as a per-vCPU ioctl at all. It only needs to be done *once*, to get/set the KVM-wide clock expressed as a function of *this* vCPU's TSC.
And the point is that if we're in use_master_clock mode, that's consistent across *all* vCPUs. There would be a bunch of additional complexity in making it a VM ioctl though, especially around the question of what to do if userspace tries to restore it when there *aren't* any vCPUs yet. So we didn't do that.
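For illustration, a rough sketch of how userspace might drive the proposed per-vCPU pair during migration. It assumes a patched <linux/kvm.h> that provides the KVM_[GS]ET_CLOCK_GUEST ioctl numbers and the pvclock payload from this series, and it elides all error handling:

#include <sys/ioctl.h>
#include <linux/kvm.h>

struct pvclock_vcpu_time_info;	/* payload assumed to come with the patched header */

/* Source: capture the guest-visible clock parameters from one vCPU. */
static int save_guest_clock(int vcpu_fd, struct pvclock_vcpu_time_info *pvti)
{
	return ioctl(vcpu_fd, KVM_GET_CLOCK_GUEST, pvti);
}

/*
 * Destination: restore them before the vCPU first runs, which is why SET
 * has to work while vcpu->arch.hv_clock is still unpopulated.
 */
static int restore_guest_clock(int vcpu_fd, struct pvclock_vcpu_time_info *pvti)
{
	return ioctl(vcpu_fd, KVM_SET_CLOCK_GUEST, pvti);
}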