Message-Id: <20220614021116.1101331-3-sashal@kernel.org>
Date: Mon, 13 Jun 2022 22:11:12 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Sasha Levin <sashal@...nel.org>, shuah@...nel.org,
seanjc@...gle.com, kvm@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: [PATCH MANUALSEL 5.18 3/6] KVM: selftests: Make hyperv_clock selftest more stable
From: Vitaly Kuznetsov <vkuznets@...hat.com>
[ Upstream commit eae260be3a0111a28fe95923e117a55dddec0384 ]
hyperv_clock doesn't always give a stable test result, especially with
AMD CPUs. The test compares the Hyper-V MSR clocksource (acquired either
with rdmsr() from within the guest or with KVM_GET_MSRS from the host)
against rdtsc(). To increase the accuracy, increase the measured delay
(done with a nop loop) by two orders of magnitude and take the mean of
rdtsc() values sampled immediately before and after rdmsr()/KVM_GET_MSRS.
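
For illustration, here is the same measurement pattern as a stand-alone
user-space sketch (not the selftest itself): read_clock() below is a
hypothetical stand-in for the guest rdmsr() / host KVM_GET_MSRS read of
HV_X64_MSR_TIME_REF_COUNT, and __rdtsc() comes from <x86intrin.h>:

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>

/* Hypothetical stand-in for the rdmsr()/KVM_GET_MSRS clocksource
 * read; any comparatively slow clock read works for the sketch. */
static uint64_t read_clock(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Long delay, as in the patched test: ~10^8 nops so that sampling
 * noise is small relative to the measured interval. */
static void nop_loop(void)
{
	int i;

	for (i = 0; i < 100000000; i++)
		asm volatile("nop");
}

int main(void)
{
	uint64_t r1, t1, r2, t2;

	/* Sample the TSC on both sides of the clock read and average,
	 * so the TSC timestamp lands mid-way through the read itself. */
	r1 = __rdtsc();
	t1 = read_clock();
	r1 = (r1 + __rdtsc()) / 2;

	nop_loop();

	r2 = __rdtsc();
	t2 = read_clock();
	r2 = (r2 + __rdtsc()) / 2;

	printf("tsc delta: %llu cycles, clock delta: %llu ns\n",
	       (unsigned long long)(r2 - r1),
	       (unsigned long long)(t2 - t1));
	return 0;
}

Averaging the two rdtsc() samples centers the TSC timestamp on the
clocksource read, so the read's own latency largely cancels out of the
r2 - r1 delta.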
Reported-by: Maxim Levitsky <mlevitsk@...hat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>
Tested-by: Maxim Levitsky <mlevitsk@...hat.com>
Message-Id: <20220601144322.1968742-1-vkuznets@...hat.com>
Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
tools/testing/selftests/kvm/x86_64/hyperv_clock.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
index e0b2bb1339b1..3330fb183c68 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
@@ -44,7 +44,7 @@ static inline void nop_loop(void)
 {
 	int i;
 
-	for (i = 0; i < 1000000; i++)
+	for (i = 0; i < 100000000; i++)
 		asm volatile("nop");
 }
@@ -56,12 +56,14 @@ static inline void check_tsc_msr_rdtsc(void)
 	tsc_freq = rdmsr(HV_X64_MSR_TSC_FREQUENCY);
 	GUEST_ASSERT(tsc_freq > 0);
 
-	/* First, check MSR-based clocksource */
+	/* For increased accuracy, take mean rdtsc() before and after rdmsr() */
 	r1 = rdtsc();
 	t1 = rdmsr(HV_X64_MSR_TIME_REF_COUNT);
+	r1 = (r1 + rdtsc()) / 2;
 	nop_loop();
 	r2 = rdtsc();
 	t2 = rdmsr(HV_X64_MSR_TIME_REF_COUNT);
+	r2 = (r2 + rdtsc()) / 2;
 
 	GUEST_ASSERT(r2 > r1 && t2 > t1);
@@ -181,12 +183,14 @@ static void host_check_tsc_msr_rdtsc(struct kvm_vm *vm)
 	tsc_freq = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TSC_FREQUENCY);
 	TEST_ASSERT(tsc_freq > 0, "TSC frequency must be nonzero");
 
-	/* First, check MSR-based clocksource */
+	/* For increased accuracy, take mean rdtsc() before and after ioctl */
 	r1 = rdtsc();
 	t1 = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TIME_REF_COUNT);
+	r1 = (r1 + rdtsc()) / 2;
 	nop_loop();
 	r2 = rdtsc();
 	t2 = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TIME_REF_COUNT);
+	r2 = (r2 + rdtsc()) / 2;
 
 	TEST_ASSERT(t2 > t1, "Time reference MSR is not monotonic (%ld <= %ld)", t1, t2);
--
2.35.1