Message-ID: <20260121225438.3908422-3-jmattson@google.com>
Date: Wed, 21 Jan 2026 14:54:00 -0800
From: Jim Mattson <jmattson@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>, 
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, 
	Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org, 
	"H. Peter Anvin" <hpa@...or.com>, Peter Zijlstra <peterz@...radead.org>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, 
	Mark Rutland <mark.rutland@....com>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>, 
	James Clark <james.clark@...aro.org>, Shuah Khan <shuah@...nel.org>, kvm@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org, 
	linux-kselftest@...r.kernel.org
Cc: Jim Mattson <jmattson@...gle.com>
Subject: [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for
 current vCPU state

Introduce amd_pmu_dormant_hg_event(), which determines whether an AMD PMC
should be dormant (i.e. not count) based on the guest's Host-Only and
Guest-Only event selector bits and the current vCPU state.

Update amd_pmu_set_eventsel_hw() to clear the event selector's enable bit
when the event is dormant.
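
For reference, a minimal standalone sketch of the intended Host-Only/Guest-Only
semantics (not KVM code; dormant(), the bit macros, and the svme/guest_mode
parameters below are illustrative stand-ins for the kvm_pmc/vcpu state the
patch actually consults):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GUESTONLY (1ULL << 40)
#define HOSTONLY  (1ULL << 41)

/* Model: should the counter be dormant for the vCPU's current mode? */
static bool dormant(uint64_t eventsel, bool svme, bool guest_mode)
{
	uint64_t hg = eventsel & (HOSTONLY | GUESTONLY);

	if (!hg || !svme)		/* no HG bits, or SVME clear: count */
		return false;
	if (hg == (HOSTONLY | GUESTONLY))
		return false;		/* both bits set: count in all modes */
	/* Host-Only counts only outside guest mode; Guest-Only only inside. */
	return (hg == HOSTONLY) == guest_mode;
}

int main(void)
{
	/* Guest-Only event: dormant while the vCPU is not in guest mode. */
	assert(dormant(GUESTONLY, true, false));
	assert(!dormant(GUESTONLY, true, true));
	/* Host-Only event: dormant while the vCPU is in guest mode. */
	assert(dormant(HOSTONLY, true, true));
	assert(!dormant(HOSTONLY, true, false));
	/* With SVME clear, the HG bits are ignored and the event counts. */
	assert(!dormant(HOSTONLY, false, true));
	printf("ok\n");
	return 0;
}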

Signed-off-by: Jim Mattson <jmattson@...gle.com>
---
 arch/x86/include/asm/perf_event.h |  2 ++
 arch/x86/kvm/svm/pmu.c            | 23 +++++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 0d9af4135e0a..7649d79d91a6 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -58,6 +58,8 @@
 #define AMD64_EVENTSEL_INT_CORE_ENABLE			(1ULL << 36)
 #define AMD64_EVENTSEL_GUESTONLY			(1ULL << 40)
 #define AMD64_EVENTSEL_HOSTONLY				(1ULL << 41)
+#define AMD64_EVENTSEL_HG_ONLY				\
+	(AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
 
 #define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT		37
 #define AMD64_EVENTSEL_INT_CORE_SEL_MASK		\
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 33c139b23a9e..f619417557f9 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -147,10 +147,33 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 1;
 }
 
+static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
+{
+	u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
+	struct kvm_vcpu *vcpu = pmc->vcpu;
+
+	if (hg_only == 0)
+		/* Not an HG_ONLY event */
+		return false;
+
+	if (!(vcpu->arch.efer & EFER_SVME))
+		/* HG_ONLY bits are ignored when SVME is clear */
+		return false;
+
+	/* Always active if both HG_ONLY bits are set */
+	if (hg_only == AMD64_EVENTSEL_HG_ONLY)
+		return false;
+
+	return !!(hg_only & AMD64_EVENTSEL_HOSTONLY) == is_guest_mode(vcpu);
+}
+
 static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
 {
 	pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
 		AMD64_EVENTSEL_GUESTONLY;
+
+	if (amd_pmu_dormant_hg_event(pmc))
+		pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
 }
 
 static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
-- 
2.52.0.457.g6b5491de43-goog

