Message-Id: <20240126085444.324918-41-xiong.y.zhang@linux.intel.com>
Date: Fri, 26 Jan 2024 16:54:43 +0800
From: Xiong Zhang <xiong.y.zhang@...ux.intel.com>
To: seanjc@...gle.com,
	pbonzini@...hat.com,
	peterz@...radead.org,
	mizhang@...gle.com,
	kan.liang@...el.com,
	zhenyuw@...ux.intel.com,
	dapeng1.mi@...ux.intel.com,
	jmattson@...gle.com
Cc: kvm@...r.kernel.org,
	linux-perf-users@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	zhiyuan.lv@...el.com,
	eranian@...gle.com,
	irogers@...gle.com,
	samantha.alt@...el.com,
	like.xu.linux@...il.com,
	chao.gao@...el.com,
	xiong.y.zhang@...ux.intel.com
Subject: [RFC PATCH 40/41] KVM: x86/pmu: Separate passthrough PMU logic in set/get_msr() from non-passthrough vPMU

From: Mingwei Zhang <mizhang@...gle.com>

Separate the passthrough PMU logic from the non-passthrough vPMU code.
There are two places in the passthrough vPMU where set/get_msr() may
call into the existing non-passthrough vPMU code: 1) setting/getting
counters; 2) setting the global_ctrl MSR.

In the former case, the non-passthrough vPMU calls into
pmc_{read,write}_counter(), which is wired to the perf API. Update
these functions to avoid invoking the perf API when the passthrough
PMU is in use.
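
For context, both paths truncate the result with pmc_bitmask(). That
helper already lives in arch/x86/kvm/pmu.h and is untouched by this
patch; it looks roughly like this (quoted for reference, not part of
the diff below):

	static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
	{
		struct kvm_pmu *pmu = pmc_to_pmu(pmc);

		return pmu->counter_bitmask[pmc->type];
	}

So in the passthrough case pmc_read_counter() simply returns the
cached pmc->counter masked to the counter width, without calling
perf_event_read_value().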

The second case is where a write to the global_ctrl MSR invokes
reprogram_counters(), which in turn invokes the non-passthrough PMU
logic, so use the pmu->passthrough flag to guard that call.
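
For reference, reprogram_counters() (as it exists in
arch/x86/kvm/pmu.h at the time of this series; shown roughly here,
not part of the diff below) only marks the changed counters for
reprogramming via the host perf subsystem:

	static inline void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
	{
		int bit;

		if (!diff)
			return;

		for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
			set_bit(bit, pmu->reprogram_pmi);
		kvm_make_request(KVM_REQ_PMU, pmu_to_vcpu(pmu));
	}

The passthrough vPMU does not back guest counters with host perf
events, so there is nothing to reprogram and the call can safely be
skipped when pmu->passthrough is set.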

Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
---
 arch/x86/kvm/pmu.c |  4 +++-
 arch/x86/kvm/pmu.h | 10 +++++++++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 9e62e96fe48a..de653a67ba93 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -652,7 +652,9 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (pmu->global_ctrl != data) {
 			diff = pmu->global_ctrl ^ data;
 			pmu->global_ctrl = data;
-			reprogram_counters(pmu, diff);
+			/* Passthrough vPMU never reprograms counters. */
+			if (!pmu->passthrough)
+				reprogram_counters(pmu, diff);
 		}
 		break;
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 0fc37a06fe48..ab8d4a8e58a8 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -70,6 +70,9 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 	u64 counter, enabled, running;
 
 	counter = pmc->counter;
+	if (pmc_to_pmu(pmc)->passthrough)
+		return counter & pmc_bitmask(pmc);
+
 	if (pmc->perf_event && !pmc->is_paused)
 		counter += perf_event_read_value(pmc->perf_event,
 						 &enabled, &running);
@@ -79,7 +82,12 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 
 static inline void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 {
-	pmc->counter += val - pmc_read_counter(pmc);
+	/* In passthrough PMU, counter value is the actual value in HW. */
+	if (pmc_to_pmu(pmc)->passthrough)
+		pmc->counter = val;
+	else
+		pmc->counter += val - pmc_read_counter(pmc);
+
 	pmc->counter &= pmc_bitmask(pmc);
 }
 
-- 
2.34.1

