Message-ID: <20251209205121.1871534-12-coltonlewis@google.com>
Date: Tue, 9 Dec 2025 20:51:08 +0000
From: Colton Lewis <coltonlewis@...gle.com>
To: kvm@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>, Jonathan Corbet <corbet@....net>,
Russell King <linux@...linux.org.uk>, Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
Mingwei Zhang <mizhang@...gle.com>, Joey Gouly <joey.gouly@....com>,
Suzuki K Poulose <suzuki.poulose@....com>, Zenghui Yu <yuzenghui@...wei.com>,
Mark Rutland <mark.rutland@....com>, Shuah Khan <shuah@...nel.org>,
Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev, linux-perf-users@...r.kernel.org,
linux-kselftest@...r.kernel.org, Colton Lewis <coltonlewis@...gle.com>
Subject: [PATCH v5 11/24] KVM: arm64: Writethrough trapped PMEVTYPER register
With FGT in place, the remaining trapped registers need to be written
through to the underlying physical registers as well as the virtual
ones. Failing to do this means guest writes will not take effect when
expected.
For the PMEVTYPER register, take care to enforce KVM's PMU event
filter: when an event is not present in the filter, set the bits that
exclude EL1 and EL0, and always clear the bit that would include EL2.
Note the virtual register is always assigned the raw value written by
the guest, so the setting of those bits stays hidden from the guest.
Signed-off-by: Colton Lewis <coltonlewis@...gle.com>
---
arch/arm64/kvm/sys_regs.c | 34 +++++++++++++++++++++++++++++++++-
1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c636840b1f6f9..0c9596325519b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1166,6 +1166,36 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
return true;
}
+static bool writethrough_pmevtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ u64 reg, u64 idx)
+{
+ u64 eventsel;
+ u64 val = p->regval;
+ u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
+ ARMV8_PMU_EXCLUDE_EL1;
+ u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+
+ __vcpu_assign_sys_reg(vcpu, reg, val);
+
+ if (idx == ARMV8_PMU_CYCLE_IDX)
+ eventsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
+ else
+ eventsel = val & kvm_pmu_event_mask(vcpu->kvm);
+
+ if (vcpu->kvm->arch.pmu_filter &&
+ !test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
+ val |= evtyper_set;
+
+ val &= ~evtyper_clr;
+
+ if (idx == ARMV8_PMU_CYCLE_IDX)
+ write_pmccfiltr(val);
+ else
+ write_pmevtypern(idx, val);
+
+ return true;
+}
+
static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
@@ -1192,7 +1222,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (!pmu_counter_idx_valid(vcpu, idx))
return false;
- if (p->is_write) {
+ if (kvm_vcpu_pmu_is_partitioned(vcpu) && p->is_write) {
+ writethrough_pmevtyper(vcpu, p, reg, idx);
+ } else if (p->is_write) {
kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
kvm_vcpu_pmu_restore_guest(vcpu);
} else {
--
2.52.0.239.gd5f0c6e74e-goog