Message-ID: <6dca783b-6532-4fa7-9e04-1c0a382a00b0@intel.com>
Date: Fri, 19 Apr 2024 16:54:05 -0700
From: "Chen, Zide" <zide.chen@...el.com>
To: David Woodhouse <dwmw2@...radead.org>, Jack Allister
<jalliste@...zon.com>, Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>, Sean Christopherson <seanjc@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
Shuah Khan <shuah@...nel.org>
Cc: Paul Durrant <paul@....org>, kvm@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 2/2] KVM: selftests: Add KVM/PV clock selftest to prove
timer drift correction
On 4/19/2024 11:43 AM, David Woodhouse wrote:
> On 19 April 2024 19:40:06 BST, David Woodhouse <dwmw2@...radead.org> wrote:
>> On 19 April 2024 18:13:16 BST, "Chen, Zide" <zide.chen@...el.com> wrote:
>>> I'm wondering what's the underlying theory that says we can definitely
>>> achieve ±1ns accuracy? I tested it on a Sapphire Rapids @2100MHz TSC
>>> frequency, and I can see delta_corrected=2 in ~2% of cases.
>>
>> Hm. Thanks for testing!
>>
>> So the KVM clock is based on the guest TSC. Given the guest TSC value T and some reference TSC value R, the KVM clock is expressed as a(T-R)+r, where little r is the value of the KVM clock when the guest TSC was R, and a is the rate at which the KVM clock advances per guest TSC tick.
>>
>> When we set the clock with KVM_SET_CLOCK_GUEST, we are changing the values of R and r to a new point in time. Call the new ones Q and q respectively.
>>
>> But we calculate precisely (within 1ns at least) what the KVM clock would have been with the *old* formula, and adjust our new offset (q) so that at our new reference TSC value Q, the formulae give exactly the same result.
>>
>> And because the *rates* are the same, they should continue to give the same results, ±1ns.
>>
>> Or such *was* my theory, at least.
>>
>> Would be interesting to see it disproven with actual numbers for the old+new pvclock structs, so I can understand where the logic goes wrong.
>>
>> Were you using frequency scaling?
>>
>
> Oh, also please could you test the updated version I posted yesterday, from https://git.infradead.org/?p=users/dwmw2/linux.git;a=shortlog;h=refs/heads/clocks
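Before getting to the results, here is how I understand the rebasing you
describe, as a minimal userspace sketch. The scaling step mirrors the
guest-side pvclock_scale_delta() arithmetic; the helper names, the 2.1GHz
scale value and the reference values are mine, made up for illustration:

#include <stdint.h>
#include <stdio.h>

/* Same scaling as pvclock_scale_delta(): ns = ((delta << shift) * mul) >> 32 */
static uint64_t scale_delta(uint64_t delta, uint32_t mul_frac, int shift)
{
	if (shift < 0)
		delta >>= -shift;
	else
		delta <<= shift;
	return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}

/* KVM clock at guest TSC T for reference pair (R, r): a(T-R)+r */
static uint64_t kvmclock_at(uint64_t tsc, uint64_t ref_tsc, uint64_t ref_ns,
			    uint32_t mul_frac, int shift)
{
	return ref_ns + scale_delta(tsc - ref_tsc, mul_frac, shift);
}

int main(void)
{
	/* Made-up scale: ~0.476 ns/cycle, i.e. a 2.1GHz TSC, shift = 0 */
	uint32_t mul_frac = (uint32_t)(((1ULL << 32) * 10) / 21);
	int shift = 0;

	/* Old reference pair (R, r); the values are arbitrary */
	uint64_t R = 0x1234567890ULL, r = 1000000000ULL;

	/*
	 * Rebase to a new reference TSC Q, choosing q so that both
	 * formulae agree exactly at T == Q.  Since the rate (mul_frac,
	 * shift) is unchanged, they can only diverge by the rounding
	 * of the final '>> 32', i.e. by at most 1ns.
	 */
	uint64_t Q = R + 0x100000ULL;
	uint64_t q = kvmclock_at(Q, R, r, mul_frac, shift);

	for (uint64_t t = Q; t < Q + 20; t++) {
		uint64_t old_ns = kvmclock_at(t, R, r, mul_frac, shift);
		uint64_t new_ns = kvmclock_at(t, Q, q, mul_frac, shift);

		printf("T=+%llu old=%llu new=%llu delta=%lld\n",
		       (unsigned long long)(t - Q),
		       (unsigned long long)old_ns,
		       (unsigned long long)new_ns,
		       (long long)(old_ns - new_ns));
	}
	return 0;
}

If I read it right, the only way the old and new formulae can disagree is
that truncation in the final '>> 32', hence the ±1ns bound; the
delta_corrected=2 cases I reported would then be outside what rounding
alone explains.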
I failed to check out your branch, so instead I downloaded the patch series
from:
https://lore.kernel.org/linux-kselftest/FABCFBD0-4B76-4662-9F7B-7E1A856BBBB6@infradead.org/T/#t
However, the selftest hangs:
[Apr19 16:15] kselftest: Running tests in kvm
[Apr19 16:16] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[ +0.000628] rcu: 78-...0: (1 GPs behind) idle=3c8c/1/0x4000000000000000 softirq=5908/5913 fqs=14025
[ +0.000468] rcu: (detected by 104, t=60003 jiffies, g=60073, q=3100 ncpus=128)
[ +0.000389] Sending NMI from CPU 104 to CPUs 78:
[ +0.000360] NMI backtrace for cpu 78
[ +0.000004] CPU: 78 PID: 33515 Comm: pvclock_test Tainted: G           O 6.9.0-rc1zide-l0+ #194
[ +0.000003] Hardware name: Inspur NF5280M7/NF5280M7, BIOS 05.08.01 08/18/2023
[ +0.000002] RIP: 0010:pvclock_update_vm_gtod_copy+0xb5/0x200 [kvm]
[ +0.000079] Code: ea 83 e1 40 48 0f 45 c2 31 d2 48 3d 00 94 35 77 76 0e 48 d1 e8 83 ea 01 48 3d 00 94 35 77 77 f2 48 3d 00 ca 9a 3b 89 c1 77 0d <01> c9 83 c2 01 81 f9 00 ca 9a 3b 76 f3 88 93 8c 95 00 00 31 c0 ba
[ +0.000002] RSP: 0018:ff368a58cfe07e30 EFLAGS: 00000087
[ +0.000002] RAX: 0000000000000000 RBX: ff368a58e0ccd000 RCX: 0000000000000000
[ +0.000001] RDX: 000000005ca49a49 RSI: 00000000000029aa RDI: 0000019ee77a1c00
[ +0.000002] RBP: ff368a58cfe07e50 R08: 0000000000000001 R09: 0000000000000000
[ +0.000000] R10: ff26383d853ab400 R11: 0000000000000002 R12: 0000000000000000
[ +0.000001] R13: ff368a58e0cd6400 R14: 0000000000000293 R15: ff368a58e0cd69f0
[ +0.000001] FS:  00007f6946473740(0000) GS:ff26384c7fb80000(0000) knlGS:0000000000000000
[ +0.000001] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.000001] CR2: 00007f69463bd445 CR3: 000000016f466006 CR4: 0000000000f71ef0
[ +0.000001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ +0.000000] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[ +0.000001] PKRU: 55555554
[ +0.000001] Call Trace:
[ +0.000004] <NMI>
[ +0.000003] ? nmi_cpu_backtrace+0x87/0xf0
[ +0.000008] ? nmi_cpu_backtrace_handler+0x11/0x20
[ +0.000005] ? nmi_handle+0x5f/0x170
[ +0.000005] ? pvclock_update_vm_gtod_copy+0xb5/0x200 [kvm]
[ +0.000045] ? default_do_nmi+0x79/0x1a0
[ +0.000004] ? exc_nmi+0xf0/0x130
[ +0.000001] ? end_repeat_nmi+0xf/0x53
[ +0.000006] ? pvclock_update_vm_gtod_copy+0xb5/0x200 [kvm]
[ +0.000041] ? pvclock_update_vm_gtod_copy+0xb5/0x200 [kvm]
[ +0.000040] ? pvclock_update_vm_gtod_copy+0xb5/0x200 [kvm]
[ +0.000039] </NMI>
[ +0.000000] <TASK>
[ +0.000001] ? preempt_count_add+0x73/0xa0
[ +0.000004] kvm_arch_init_vm+0xf1/0x1e0 [kvm]
[ +0.000049] kvm_create_vm+0x370/0x650 [kvm]
[ +0.000036] kvm_dev_ioctl+0x88/0x180 [kvm]
[ +0.000034] __x64_sys_ioctl+0x8e/0xd0
[ +0.000007] do_syscall_64+0x5b/0x120
[ +0.000003] entry_SYSCALL_64_after_hwframe+0x6c/0x74
[ +0.000003] RIP: 0033:0x7f694631a94f
[ +0.000002] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <41> 89 c0 3d 00 f0 ff ff 77 1f 48 8b 44 24 18 64 48 2b 04 25 28 00
[ +0.000001] RSP: 002b:00007ffca91b2e50 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ +0.000002] RAX: ffffffffffffffda RBX: 0000000000434480 RCX: 00007f694631a94f
[ +0.000001] RDX: 0000000000000000 RSI: 000000000000ae01 RDI: 0000000000000005
[ +0.000000] RBP: 0000000000000009 R08: 000000000041b198 R09: 000000000041bfbf
[ +0.000001] R10: 00007f69463d8882 R11: 0000000000000246 R12: 0000000000434480
[ +0.000000] R13: 000000000041e0f0 R14: 0000000000001000 R15: 0000000000000207
[ +0.000002] </TASK>