Message-ID: <20240109230250.424295-30-seanjc@google.com>
Date: Tue, 9 Jan 2024 15:02:49 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Kan Liang <kan.liang@...ux.intel.com>, Dapeng Mi <dapeng1.mi@...ux.intel.com>,
Jim Mattson <jmattson@...gle.com>, Jinrong Liang <cloudliang@...cent.com>,
Aaron Lewis <aaronlewis@...gle.com>, Like Xu <likexu@...cent.com>
Subject: [PATCH v10 29/29] KVM: selftests: Extend PMU counters test to
validate RDPMC after WRMSR
Extend the read/write PMU counters subtest to verify that RDPMC also reads
back the written value. Opportunistically verify that attempting to use
the "fast" mode of RDPMC fails, as the "fast" flag is only supported by
non-architectural PMUs, which KVM doesn't virtualize.
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
.../selftests/kvm/x86_64/pmu_counters_test.c | 41 +++++++++++++++++++
1 file changed, 41 insertions(+)
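
Note for reviewers (not part of the patch): below is a minimal standalone
sketch of the RDPMC index encoding the new test exercises, reusing the
INTEL_RDPMC_FIXED/INTEL_RDPMC_FAST names from the selftests' pmu.h but
redefining them locally so it builds on its own. The bit positions (ECX[30]
selects the fixed-counter range, ECX[31] requests the "fast" 32-bit read)
are assumed from the SDM; the rdpmc_index() helper is purely illustrative.

/* Illustrative only, not part of the selftests. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed values: ECX[30] = fixed-counter select, ECX[31] = "fast" flag. */
#define INTEL_RDPMC_FIXED	(1u << 30)
#define INTEL_RDPMC_FAST	(1u << 31)

static uint32_t rdpmc_index(uint32_t counter, bool fixed, bool fast)
{
	uint32_t idx = counter;

	if (fixed)
		idx |= INTEL_RDPMC_FIXED;
	if (fast)
		idx |= INTEL_RDPMC_FAST;
	return idx;
}

int main(void)
{
	/* GP counter 2, normal read: ECX = 0x00000002. */
	printf("gp2       = 0x%08x\n", rdpmc_index(2, false, false));
	/* Fixed counter 1: ECX = 0x40000001. */
	printf("fixed1    = 0x%08x\n", rdpmc_index(1, true, false));
	/* Fast read of fixed counter 1; the test expects this to #GP. */
	printf("fast fix1 = 0x%08x\n", rdpmc_index(1, true, true));
	return 0;
}

This mirrors what guest_rd_wr_counters() does in the diff: it sets
INTEL_RDPMC_FIXED when base_msr is MSR_CORE_PERF_FIXED_CTR0, then ORs in
INTEL_RDPMC_FAST for the negative test, since KVM's architectural PMU never
advertises fast mode.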
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index cb808ac827ba..ae5f6042f1e8 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -325,9 +325,30 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector, \
"Expected " #insn "(0x%x) to yield 0x%lx, got 0x%lx", \
msr, expected_val, val);
+static void guest_test_rdpmc(uint32_t rdpmc_idx, bool expect_success,
+ uint64_t expected_val)
+{
+ uint8_t vector;
+ uint64_t val;
+
+ vector = rdpmc_safe(rdpmc_idx, &val);
+ GUEST_ASSERT_PMC_MSR_ACCESS(RDPMC, rdpmc_idx, !expect_success, vector);
+ if (expect_success)
+ GUEST_ASSERT_PMC_VALUE(RDPMC, rdpmc_idx, val, expected_val);
+
+ if (!is_forced_emulation_enabled)
+ return;
+
+ vector = rdpmc_safe_fep(rdpmc_idx, &val);
+ GUEST_ASSERT_PMC_MSR_ACCESS(RDPMC, rdpmc_idx, !expect_success, vector);
+ if (expect_success)
+ GUEST_ASSERT_PMC_VALUE(RDPMC, rdpmc_idx, val, expected_val);
+}
+
static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters,
uint8_t nr_counters, uint32_t or_mask)
{
+ const bool pmu_has_fast_mode = !guest_get_pmu_version();
uint8_t i;
for (i = 0; i < nr_possible_counters; i++) {
@@ -352,6 +373,7 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
const uint64_t expected_val = expect_success ? test_val : 0;
const bool expect_gp = !expect_success && msr != MSR_P6_PERFCTR0 &&
msr != MSR_P6_PERFCTR1;
+ uint32_t rdpmc_idx;
uint8_t vector;
uint64_t val;
@@ -365,6 +387,25 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
if (!expect_gp)
GUEST_ASSERT_PMC_VALUE(RDMSR, msr, val, expected_val);
+ /*
+ * Redo the read tests with RDPMC, which has different indexing
+ * semantics and additional capabilities.
+ */
+ rdpmc_idx = i;
+ if (base_msr == MSR_CORE_PERF_FIXED_CTR0)
+ rdpmc_idx |= INTEL_RDPMC_FIXED;
+
+ guest_test_rdpmc(rdpmc_idx, expect_success, expected_val);
+
+ /*
+ * KVM doesn't support non-architectural PMUs, i.e. it should be
+ * impossible to have fast mode RDPMC. Verify that attempting
+ * to use fast RDPMC always #GPs.
+ */
+ GUEST_ASSERT(!expect_success || !pmu_has_fast_mode);
+ rdpmc_idx |= INTEL_RDPMC_FAST;
+ guest_test_rdpmc(rdpmc_idx, false, -1ull);
+
vector = wrmsr_safe(msr, 0);
GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector);
}
--
2.43.0.472.g3155946c3a-goog