Message-ID: <01bbe664f61475fb6e5d409a3e7f2da304ed2560.camel@redhat.com>
Date: Tue, 07 Jun 2022 11:26:18 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: selftests: Make hyperv_clock selftest more stable
On Wed, 2022-06-01 at 16:43 +0200, Vitaly Kuznetsov wrote:
> hyperv_clock doesn't always give a stable test result, especially with
> AMD CPUs. The test compares Hyper-V MSR clocksource (acquired either
> with rdmsr() from within the guest or KVM_GET_MSRS from the host)
> against rdtsc(). To increase the accuracy, increase the measured delay
> (done with nop loop) by two orders of magnitude and take the mean rdtsc()
> value before and after rdmsr()/KVM_GET_MSRS.
Tiny nitpick: any reason why you think that AMD is more prone
to the failure? This test was failing on my Intel laptop as well.
>
> Reported-by: Maxim Levitsky <mlevitsk@...hat.com>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> tools/testing/selftests/kvm/x86_64/hyperv_clock.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
> index e0b2bb1339b1..3330fb183c68 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
> @@ -44,7 +44,7 @@ static inline void nop_loop(void)
>  {
>         int i;
>
> -       for (i = 0; i < 1000000; i++)
> +       for (i = 0; i < 100000000; i++)
>                 asm volatile("nop");
>  }
>
> @@ -56,12 +56,14 @@ static inline void check_tsc_msr_rdtsc(void)
>         tsc_freq = rdmsr(HV_X64_MSR_TSC_FREQUENCY);
>         GUEST_ASSERT(tsc_freq > 0);
>
> -       /* First, check MSR-based clocksource */
> +       /* For increased accuracy, take mean rdtsc() before and after rdmsr() */
>         r1 = rdtsc();
>         t1 = rdmsr(HV_X64_MSR_TIME_REF_COUNT);
> +       r1 = (r1 + rdtsc()) / 2;
>         nop_loop();
>         r2 = rdtsc();
>         t2 = rdmsr(HV_X64_MSR_TIME_REF_COUNT);
> +       r2 = (r2 + rdtsc()) / 2;
>
>         GUEST_ASSERT(r2 > r1 && t2 > t1);
>
> @@ -181,12 +183,14 @@ static void host_check_tsc_msr_rdtsc(struct kvm_vm *vm)
>         tsc_freq = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TSC_FREQUENCY);
>         TEST_ASSERT(tsc_freq > 0, "TSC frequency must be nonzero");
>
> -       /* First, check MSR-based clocksource */
> +       /* For increased accuracy, take mean rdtsc() before and after ioctl */
>         r1 = rdtsc();
>         t1 = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TIME_REF_COUNT);
> +       r1 = (r1 + rdtsc()) / 2;
>         nop_loop();
>         r2 = rdtsc();
>         t2 = vcpu_get_msr(vm, VCPU_ID, HV_X64_MSR_TIME_REF_COUNT);
> +       r2 = (r2 + rdtsc()) / 2;
>
>         TEST_ASSERT(t2 > t1, "Time reference MSR is not monotonic (%ld <= %ld)", t1, t2);
>
Both changes make sense, and the test doesn't fail anymore on my AMD
laptop.
Soon I'll test on my Intel laptop as well.
Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>
Tested-by: Maxim Levitsky <mlevitsk@...hat.com>
Best regards,
Maxim Levitsky