Message-ID: <YpeOkx0gkINeKFuz@google.com>
Date: Wed, 1 Jun 2022 16:06:43 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: selftests: Make hyperv_clock selftest more stable

On Wed, Jun 01, 2022, Vitaly Kuznetsov wrote:
> hyperv_clock doesn't always give a stable test result, especially with
> AMD CPUs. The test compares the Hyper-V MSR clocksource (acquired either
> with rdmsr() from within the guest or with KVM_GET_MSRS from the host)
> against rdtsc(). To increase the accuracy, increase the measured delay
> (done with a nop loop) by two orders of magnitude and use the mean of the
> rdtsc() values taken before and after rdmsr()/KVM_GET_MSRS.
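
[For reference, the averaging described above boils down to something like
the sketch below; the helper names, the local rdtsc() definition and the
nop loop count are illustrative only, not the actual selftest code.]

#include <stdint.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

/*
 * Burn time between samples; more iterations means any fixed per-read
 * noise is a smaller fraction of the measured interval.
 */
static void nop_loop(void)
{
	int i;

	for (i = 0; i < 100000000; i++)
		asm volatile("nop");
}

/*
 * Take the reference TSC as the midpoint of rdtsc() readings captured
 * immediately before and after the clocksource read (rdmsr() in the
 * guest, KVM_GET_MSRS on the host).
 */
static uint64_t mean_tsc_around(uint64_t (*read_clocksource)(void),
				uint64_t *clocksource_val)
{
	uint64_t t1, t2;

	t1 = rdtsc();
	*clocksource_val = read_clocksource();
	t2 = rdtsc();

	return t1 + (t2 - t1) / 2;
}
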
Rather than "fixing" the test by reducing the impact of noise, can we first try
to reduce the noise itself? E.g. pin the test to a single CPU, redo the measurement
if the test is interrupted (/proc/interrupts?), etc... Bonus points if that can
be implemented as a helper or pair of helpers so that other tests that want to
measure latency/time don't need to reinvent the wheel.
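
[A minimal sketch of what such helpers could look like, using only standard
libc; the names, the retry limit and the idea of diffing an interrupt count
around the measurement are illustrative, not existing selftest
infrastructure.]

#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>
#include <stdint.h>

/* Pin the calling task to a single CPU so migrations can't skew the TSC. */
static void pin_self_to_cpu(int cpu)
{
	cpu_set_t set;
	int ret;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	ret = sched_setaffinity(0, sizeof(set), &set);
	assert(!ret);	/* a real helper would use the selftests' TEST_ASSERT() */
}

/*
 * Retry the measurement until it completes undisturbed; the nr_interrupts()
 * callback could e.g. sum the pinned CPU's column in /proc/interrupts.
 */
static uint64_t measure_uninterrupted(uint64_t (*measure)(void),
				      uint64_t (*nr_interrupts)(void))
{
	uint64_t before, val;
	int i;

	for (i = 0; i < 10; i++) {
		before = nr_interrupts();
		val = measure();
		if (nr_interrupts() == before)
			return val;
	}
	assert(0 && "measurement disturbed on every attempt");
	return 0;
}

A test would then call pin_self_to_cpu() once at startup and wrap each
latency/time measurement in measure_uninterrupted().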