Message-Id: <20220713122507.29236-7-likexu@tencent.com>
Date:   Wed, 13 Jul 2022 20:25:05 +0800
From:   Like Xu <like.xu.linux@...il.com>
To:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Cc:     Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, Like Xu <likexu@...cent.com>
Subject: [PATCH 6/7] KVM: x86/pmu: Defer reprogram_counter() to kvm_pmu_handle_event()

From: Like Xu <likexu@...cent.com>

Between a vm-exit and the next vm-entry, requests from different sources
may each try to create one or more perf_events via reprogram_counter(),
and a later request can undo the actions of an earlier one, in particular
by repeating calls into the same perf subsystem interfaces. These
repetitive calls can be omitted because only the final state of the
perf_event, and the hardware resources it occupies, takes effect for the
guest right before vm-entry.

To realize this optimization, KVM marks a counter that needs
reprogramming by setting its bit in reprogram_pmi, and defers the actual
work to kvm_pmu_handle_event() by raising a KVM_REQ_PMU request on the
vcpu.
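
For illustration only, a minimal standalone model of this
defer-and-coalesce pattern (plain userspace C, not the kernel code; the
bitmap, the integer counter indices and the printf() stand in for
reprogram_pmi, struct kvm_pmc and the real perf_event work):

/*
 * Model: repeated "reprogram" requests for the same counter collapse
 * into a single pending bit; the expensive work runs once, later.
 */
#include <stdio.h>

#define NR_COUNTERS 8

static unsigned long reprogram_pmi;	/* one pending bit per counter */

/* Cheap producer: mark the counter (stands in for kvm_make_request()). */
static void reprogram_counter(int idx)
{
	reprogram_pmi |= 1UL << idx;
}

/* Expensive consumer: stands in for the real perf_event reprogramming. */
static void __reprogram_counter(int idx)
{
	printf("reprogramming counter %d\n", idx);
}

/* Deferred handler: runs once, right before "vm-entry". */
static void kvm_pmu_handle_event(void)
{
	for (int idx = 0; idx < NR_COUNTERS; idx++) {
		if (reprogram_pmi & (1UL << idx)) {
			reprogram_pmi &= ~(1UL << idx);
			__reprogram_counter(idx);
		}
	}
}

int main(void)
{
	/* Three requests for counter 0 during one "exit"... */
	reprogram_counter(0);
	reprogram_counter(0);
	reprogram_counter(0);
	reprogram_counter(3);

	/* ...cost exactly one reprogram per counter. */
	kvm_pmu_handle_event();	/* prints counters 0 and 3 once each */
	return 0;
}

In the model, four reprogram_counter() calls collapse into two
__reprogram_counter() invocations, mirroring how repeated requests during
a single exit now cost only a bit set plus one deferred reprogram.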

Opportunistically update a comment for pmu->reprogram_pmi.

Signed-off-by: Like Xu <likexu@...cent.com>
---
 arch/x86/kvm/pmu.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 2c03fe208093..681d3ac8d75c 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -101,7 +101,7 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	bool skip_pmi = false;
 
-	/* Ignore counters that have been reprogrammed already. */
+	/* Ignore counters that have not been reprogrammed. */
 	if (test_and_set_bit(pmc->idx, pmu->reprogram_pmi))
 		return;
 
@@ -289,6 +289,13 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 }
 
 void reprogram_counter(struct kvm_pmc *pmc)
+{
+	__set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
+	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
+}
+EXPORT_SYMBOL_GPL(reprogram_counter);
+
+static void __reprogram_counter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	u64 eventsel = pmc->eventsel;
@@ -330,7 +337,6 @@ void reprogram_counter(struct kvm_pmc *pmc)
 			      !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
 			      eventsel & ARCH_PERFMON_EVENTSEL_INT);
 }
-EXPORT_SYMBOL_GPL(reprogram_counter);
 
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
@@ -340,11 +346,12 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
 		struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
 
-		if (unlikely(!pmc || !pmc->perf_event)) {
+		if (unlikely(!pmc)) {
 			clear_bit(bit, pmu->reprogram_pmi);
 			continue;
 		}
-		reprogram_counter(pmc);
+
+		__reprogram_counter(pmc);
 	}
 
 	/*
@@ -522,7 +529,7 @@ static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 	prev_count = pmc->counter;
 	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
 
-	reprogram_counter(pmc);
+	__reprogram_counter(pmc);
 	if (pmc->counter < prev_count)
 		__kvm_perf_overflow(pmc, false);
 }
-- 
2.37.0
