Message-ID: <20250806195706.1650976-34-seanjc@google.com>
Date: Wed, 6 Aug 2025 12:56:55 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
Tianrui Zhao <zhaotianrui@...ngson.cn>, Bibo Mao <maobibo@...ngson.cn>,
Huacai Chen <chenhuacai@...nel.org>, Anup Patel <anup@...infault.org>,
Paul Walmsley <paul.walmsley@...ive.com>, Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>, Xin Li <xin@...or.com>, "H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
kvm@...r.kernel.org, loongarch@...ts.linux.dev, kvm-riscv@...ts.infradead.org,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org, Kan Liang <kan.liang@...ux.intel.com>,
Yongwei Ma <yongwei.ma@...el.com>, Mingwei Zhang <mizhang@...gle.com>,
Xiong Zhang <xiong.y.zhang@...ux.intel.com>, Sandipan Das <sandipan.das@....com>,
Dapeng Mi <dapeng1.mi@...ux.intel.com>
Subject: [PATCH v5 33/44] KVM: x86/pmu: Bypass perf checks when emulating
mediated PMU counter accesses

From: Dapeng Mi <dapeng1.mi@...ux.intel.com>

When emulating a PMC counter read or write for a mediated PMU, bypass the
perf checks and emulated_counter logic as the counters aren't proxied
through perf, i.e. pmc->counter always holds the guest's up-to-date value,
and thus there's no need to defer emulated overflow checks.

Suggested-by: Sean Christopherson <seanjc@...gle.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
Co-developed-by: Mingwei Zhang <mizhang@...gle.com>
Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
[sean: split from event filtering change, write shortlog+changelog]
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/kvm/pmu.c | 5 +++++
arch/x86/kvm/pmu.h | 3 +++
2 files changed, 8 insertions(+)
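
For reference (reviewer note below the '---' cut, not part of the commit
message): both bypasses still mask the value with pmc_bitmask(), so the
emulated access honors the guest-visible counter width even though perf
never sees the value.  A paraphrased sketch of that existing helper from
arch/x86/kvm/pmu.h, shown only for context and assuming its current
definition:

	static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
	{
		struct kvm_pmu *pmu = pmc_to_pmu(pmc);

		/* Per-type width mask, e.g. GP vs. fixed counters. */
		return pmu->counter_bitmask[pmc->type];
	}
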
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 817ef852bdf9..082d2905882b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -377,6 +377,11 @@ static void pmc_update_sample_period(struct kvm_pmc *pmc)
 
 void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 {
+	if (kvm_vcpu_has_mediated_pmu(pmc->vcpu)) {
+		pmc->counter = val & pmc_bitmask(pmc);
+		return;
+	}
+
 	/*
 	 * Drop any unconsumed accumulated counts, the WRMSR is a write, not a
 	 * read-modify-write. Adjust the counter value so that its value is
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 51963a3a167a..1c9d26d60a60 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -111,6 +111,9 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 {
 	u64 counter, enabled, running;
 
+	if (kvm_vcpu_has_mediated_pmu(pmc->vcpu))
+		return pmc->counter & pmc_bitmask(pmc);
+
 	counter = pmc->counter + pmc->emulated_counter;
 	if (pmc->perf_event && !pmc->is_paused)
 		counter += perf_event_read_value(pmc->perf_event,
--
2.50.1.565.gc32cd1483b-goog