lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <aTia74R74upcsMEA@kernel.org>
Date: Tue, 9 Dec 2025 13:55:59 -0800
From: Oliver Upton <oupton@...nel.org>
To: Colton Lewis <coltonlewis@...gle.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
	Jonathan Corbet <corbet@....net>,
	Russell King <linux@...linux.org.uk>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
	Oliver Upton <oliver.upton@...ux.dev>,
	Mingwei Zhang <mizhang@...gle.com>, Joey Gouly <joey.gouly@....com>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Zenghui Yu <yuzenghui@...wei.com>,
	Mark Rutland <mark.rutland@....com>, Shuah Khan <shuah@...nel.org>,
	Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>,
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
	linux-perf-users@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v5 17/24] KVM: arm64: Context swap Partitioned PMU guest
 registers

On Tue, Dec 09, 2025 at 08:51:14PM +0000, Colton Lewis wrote:
> +/**
> + * kvm_pmu_load() - Load untrapped PMU registers
> + * @vcpu: Pointer to struct kvm_vcpu
> + *
> + * Load all untrapped PMU registers from the VCPU into the PCPU. Mask
> + * to only bits belonging to guest-reserved counters and leave
> + * host-reserved counters alone in bitmask registers.
> + */
> +void kvm_pmu_load(struct kvm_vcpu *vcpu)
> +{
> +	struct arm_pmu *pmu;
> +	u64 mask;
> +	u8 i;
> +	u64 val;
> +

Assert that preemption is disabled.

> +	/*
> +	 * If we aren't using FGT then we are trapping everything
> +	 * anyway, so no need to bother with the swap.
> +	 */
> +	if (!kvm_vcpu_pmu_use_fgt(vcpu))
> +		return;

Uhh... Then how do events count in this case?

The absence of FEAT_FGT shouldn't affect the residence of the guest PMU
context. We just need to handle the extra traps, ideally by reading the
PMU registers directly as a fast path exit handler.

> +	pmu = vcpu->kvm->arch.arm_pmu;
> +
> +	for (i = 0; i < pmu->hpmn_max; i++) {
> +		val = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
> +		write_pmevcntrn(i, val);
> +	}
> +
> +	val = __vcpu_sys_reg(vcpu, PMCCNTR_EL0);
> +	write_pmccntr(val);
> +
> +	val = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +	write_pmuserenr(val);

What about the host's value for PMUSERENR?

> +	val = __vcpu_sys_reg(vcpu, PMSELR_EL0);
> +	write_pmselr(val);

PMSELR_EL0 needs to be switched late, e.g. at sysreg_restore_guest_state_vhe().
Even though the host doesn't currently use the selector-based accessor,
I'd prefer we not load things that'd affect the host context until we're
about to enter the guest.

> +	/* Save only the stateful writable bits. */
> +	val = __vcpu_sys_reg(vcpu, PMCR_EL0);
> +	mask = ARMV8_PMU_PMCR_MASK &
> +		~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
> +	write_pmcr(val & mask);
> +
> +	/*
> +	 * When handling these:
> +	 * 1. Apply only the bits for guest counters (indicated by mask)
> +	 * 2. Use the different registers for set and clear
> +	 */
> +	mask = kvm_pmu_guest_counter_mask(pmu);
> +
> +	val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> +	write_pmcntenset(val & mask);
> +	write_pmcntenclr(~val & mask);
> +
> +	val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
> +	write_pmintenset(val & mask);
> +	write_pmintenclr(~val & mask);

Is this safe? What happens if we put the PMU into an overflow condition?

> +}
> +
> +/**
> + * kvm_pmu_put() - Put untrapped PMU registers
> + * @vcpu: Pointer to struct kvm_vcpu
> + *
> + * Put all untrapped PMU registers from the VCPU into the PCPU. Mask
> + * to only bits belonging to guest-reserved counters and leave
> + * host-reserved counters alone in bitmask registers.
> + */
> +void kvm_pmu_put(struct kvm_vcpu *vcpu)
> +{
> +	struct arm_pmu *pmu;
> +	u64 mask;
> +	u8 i;
> +	u64 val;
> +
> +	/*
> +	 * If we aren't using FGT then we are trapping everything
> +	 * anyway, so no need to bother with the swap.
> +	 */
> +	if (!kvm_vcpu_pmu_use_fgt(vcpu))
> +		return;
> +
> +	pmu = vcpu->kvm->arch.arm_pmu;
> +
> +	for (i = 0; i < pmu->hpmn_max; i++) {
> +		val = read_pmevcntrn(i);
> +		__vcpu_assign_sys_reg(vcpu, PMEVCNTR0_EL0 + i, val);
> +	}
> +
> +	val = read_pmccntr();
> +	__vcpu_assign_sys_reg(vcpu, PMCCNTR_EL0, val);
> +
> +	val = read_pmuserenr();
> +	__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0, val);
> +
> +	val = read_pmselr();
> +	__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, val);
> +
> +	val = read_pmcr();
> +	__vcpu_assign_sys_reg(vcpu, PMCR_EL0, val);
> +
> +	/* Mask these to only save the guest relevant bits. */
> +	mask = kvm_pmu_guest_counter_mask(pmu);
> +
> +	val = read_pmcntenset();
> +	__vcpu_assign_sys_reg(vcpu, PMCNTENSET_EL0, val & mask);
> +
> +	val = read_pmintenset();
> +	__vcpu_assign_sys_reg(vcpu, PMINTENSET_EL1, val & mask);

What if the PMU is in an overflow state at this point?

Thanks,
Oliver

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ