Message-ID: <87ee07nq8i.fsf@redhat.com>
Date: Thu, 02 Jun 2022 15:34:37 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: selftests: Make hyperv_clock selftest more stable
Sean Christopherson <seanjc@...gle.com> writes:
> On Wed, Jun 01, 2022, Vitaly Kuznetsov wrote:
>> hyperv_clock doesn't always give a stable test result, especially on
>> AMD CPUs. The test compares the Hyper-V MSR clocksource (acquired either
>> with rdmsr() from within the guest or with KVM_GET_MSRS from the host)
>> against rdtsc(). To increase the accuracy, increase the measured delay
>> (done with a nop loop) by two orders of magnitude and use the mean of
>> the rdtsc() values taken before and after rdmsr()/KVM_GET_MSRS.
>
> Rather than "fixing" the test by reducing the impact of noise, can we first try
> to reduce the noise itself? E.g. pin the test to a single CPU, redo the measurement
> if the test is interrupted (/proc/interrupts?), etc... Bonus points if that can
> be implemented as a helper or pair of helpers so that other tests that want to
> measure latency/time don't need to reinvent the wheel.
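For context, the averaging the patch does boils down to roughly the
following (a simplified sketch, not the selftest code itself; rdmsr()/
rdtsc() are open-coded here and the MSR is the Hyper-V partition
reference counter):

#include <stdint.h>

/* Hyper-V partition reference counter MSR, counts in 100ns units. */
#define HV_X64_MSR_TIME_REF_COUNT	0x40000020

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

static inline uint64_t rdmsr(uint32_t msr)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
	return ((uint64_t)hi << 32) | lo;
}

/*
 * Bracket the MSR read with rdtsc() and report the midpoint as the TSC
 * reference, so the latency of the read itself cancels out to first
 * order.
 */
static uint64_t read_ref_count_and_tsc(uint64_t *tsc)
{
	uint64_t t1, t2, ref;

	t1 = rdtsc();
	ref = rdmsr(HV_X64_MSR_TIME_REF_COUNT);
	t2 = rdtsc();

	*tsc = t1 + (t2 - t1) / 2;
	return ref;
}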
While I'm not certain that task migration to another CPU was always the
problem here (maybe the measured interval is just too short anyway), I
agree these are good ideas and will look into them, thanks! A rough
sketch of what I have in mind for the helpers follows below.
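Roughly something like this (a sketch only; helper names are made up,
pinning is plain sched_setaffinity(2), and a wide rdtsc() bracket is
treated as "we got interrupted/migrated", as a cheaper stand-in for
diffing /proc/interrupts):

#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

/* Pin the calling thread to @cpu so all TSC reads come from one CPU. */
static void pin_self_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		exit(1);
	}
}

/*
 * Re-run @measure() until the rdtsc() bracket around it is tighter than
 * @max_cycles; a wide bracket usually means an interrupt or preemption
 * landed in the middle of the measurement.
 */
static uint64_t measure_stable(uint64_t (*measure)(void), uint64_t max_cycles)
{
	uint64_t t1, t2, val;

	do {
		t1 = rdtsc();
		val = measure();
		t2 = rdtsc();
	} while (t2 - t1 > max_cycles);

	return val;
}

Such a pair could then live in a common header so other latency-sensitive
selftests don't have to reinvent it.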
--
Vitaly