Message-Id: <20230221082925.16014-1-likexu@tencent.com>
Date:   Tue, 21 Feb 2023 16:29:25 +0800
From:   Like Xu <like.xu.linux@...il.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH kvm-unit-tests] x86/pmu: Check counter is working properly before AMD pmu init

From: Like Xu <likexu@...cent.com>

Avoid misleading PMU initialization when vPMU is globally disabled by
performing a write/read consistency check on one of AMD's counters.
Without this check, the pmu test fails rather than being skipped.

Due to historical reasons, the expectation that writes to AMD counters
raise #GP when vPMU is globally disabled is not in line with the
architectural specification (even when vPMU is not disabled, writing to
AMD reserved bits will likely not result in #GP, but rather a filtered
write). Like many mature guests, KUT can therefore check write/read
consistency against a counter MSR that must be supported in order to
determine whether the PMU is available, and continue testing from there.

Signed-off-by: Like Xu <likexu@...cent.com>
---
Nit:
- A generic "durable" MSR check could be applied later if needed;
- Two more tests: 20221226075412.61167-1-likexu@...cent.com;
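- For reference, the probe idea is just a write-then-read on a legacy
  counter MSR. A minimal, self-contained sketch of that idea follows;
  the plain wrmsr()/rdmsr() wrappers and the hard-coded 48-bit width
  are illustrative assumptions, while the patch itself uses the
  wrmsr_safe()/rdmsr_safe() helpers and PMC_DEFAULT_WIDTH from the
  kvm-unit-tests library:

/*
 * Minimal sketch of the probe (not the patch itself): write a
 * full-width pattern to the legacy counter MSR and check whether it
 * reads back.  wrmsr()/rdmsr() stand in for the *_safe() helpers the
 * patch uses to tolerate a potential #GP.
 */
#include <stdbool.h>
#include <stdint.h>

#define MSR_K7_PERFCTR0	0xc0010004	/* first legacy AMD perf counter */
#define PMC_WIDTH	48		/* assumed default counter width */

/* Assumed MSR accessors (provided by the environment in practice). */
extern void wrmsr(uint32_t msr, uint64_t val);
extern uint64_t rdmsr(uint32_t msr);

static bool amd_counter_write_sticks(void)
{
	uint64_t before = (1ull << PMC_WIDTH) - 1;
	uint64_t after;

	wrmsr(MSR_K7_PERFCTR0, before);	/* dropped if vPMU is globally off */
	after = rdmsr(MSR_K7_PERFCTR0);

	return after == before;		/* true => counter is usable */
}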

 lib/x86/pmu.c | 16 +++++++++++++++-
 x86/pmu.c     |  3 +++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index 0f2afd6..1e23a17 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -2,6 +2,20 @@
 
 struct pmu_caps pmu;
 
+/*
+ * All AMD CPUs that support vPMU have MSR_K7_PERFCTRx, but values
+ * written to these counters are discarded when vPMU is globally disabled.
+ */
+static inline bool amd_k7_counter_is_durable(void)
+{
+	u64 after, before = (1ull << PMC_DEFAULT_WIDTH) - 1;
+
+	wrmsr_safe(MSR_K7_PERFCTR0, before);
+	rdmsr_safe(MSR_K7_PERFCTR0, &after);
+
+	return before == after;
+}
+
 void pmu_init(void)
 {
 	pmu.is_intel = is_intel();
@@ -38,7 +52,7 @@ void pmu_init(void)
 			pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL;
 			pmu.msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL;
 		}
-	} else {
+	} else if (amd_k7_counter_is_durable()) {
 		if (this_cpu_has(X86_FEATURE_PERFCTR_CORE)) {
 			/* Performance Monitoring Version 2 Supported */
 			if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) {
diff --git a/x86/pmu.c b/x86/pmu.c
index 72c2c9c..1f041e6 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -684,6 +684,9 @@ int main(int ac, char **av)
 		gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]);
 		report_prefix_push("Intel");
 		set_ref_cycle_expectations();
+	} else if (!pmu.nr_gp_counters) {
+		report_skip("No AMD PMU detected");
+		return report_summary();
 	} else {
 		gp_events_size = sizeof(amd_gp_events)/sizeof(amd_gp_events[0]);
 		gp_events = (struct pmu_event *)amd_gp_events;

base-commit: e3c5c3ef2524c58023073c0fadde2e8ae3c04ec6
-- 
2.39.2
