Message-ID: <aUH_7yYZsmFlRvEc@kernel.org>
Date: Tue, 16 Dec 2025 16:57:19 -0800
From: Oliver Upton <oupton@...nel.org>
To: Colton Lewis <coltonlewis@...gle.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
	Jonathan Corbet <corbet@....net>,
	Russell King <linux@...linux.org.uk>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
	Oliver Upton <oliver.upton@...ux.dev>,
	Mingwei Zhang <mizhang@...gle.com>, Joey Gouly <joey.gouly@....com>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Zenghui Yu <yuzenghui@...wei.com>,
	Mark Rutland <mark.rutland@....com>, Shuah Khan <shuah@...nel.org>,
	Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>,
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
	linux-perf-users@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v5 18/24] KVM: arm64: Enforce PMU event filter at
 vcpu_load()

Re-reading this patch...

On Tue, Dec 09, 2025 at 08:51:15PM +0000, Colton Lewis wrote:
> The KVM API for event filtering says that counters do not count when
> blocked by the event filter. To enforce that, the event filter must be
> rechecked on every load, because it might have changed since the last
> time the guest wrote a value.

Just directly state that this is guarding against userspace programming
an unsupported event ID.

> +static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
> +{
> +	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
> +	u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
> +		ARMV8_PMU_EXCLUDE_EL1;
> +	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
> +	u8 i;
> +	u64 val;
> +	u64 evsel;
> +
> +	if (!pmu)
> +		return;
> +
> +	for (i = 0; i < pmu->hpmn_max; i++) {

Iterate the bitmask of counters and you'll handle the cycle counter 'for
free'.
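
Untested sketch of what I mean; kvm_pmu_implemented_counter_mask() is my
guess at the helper name this series has available, and the '32' should
really be whatever constant covers the cycle counter bit:

	unsigned long mask = kvm_pmu_implemented_counter_mask(vcpu);
	unsigned int i;

	for_each_set_bit(i, &mask, 32) {
		/* cycle counter uses PMCCFILTR_EL0 rather than PMEVTYPERn_EL0 */
		u64 reg = (i == ARMV8_PMU_CYCLE_IDX) ? PMCCFILTR_EL0
						     : PMEVTYPER0_EL0 + i;
		u64 val = __vcpu_sys_reg(vcpu, reg);

		/* apply the event filter to 'val' as below, then write it out */
	}

With that, PMCCFILTR_EL0 falls out of the same loop instead of needing a
special case.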

<snip>

> +		val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
> +		evsel = val & kvm_pmu_event_mask(vcpu->kvm);
> +
> +		if (vcpu->kvm->arch.pmu_filter &&
> +		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
> +			val |= evtyper_set;
> +
> +		val &= ~evtyper_clr;
> +		write_pmevtypern(i, val);

</snip>

This all needs to be shared with writethrough_pmevtyper() instead of
open-coding the same thing.
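
Roughly (helper name made up, body lifted from the hunk above):

	static u64 kvm_pmu_filter_evtyper(struct kvm_vcpu *vcpu, u64 val)
	{
		u64 evsel = val & kvm_pmu_event_mask(vcpu->kvm);

		/* filtered events count nowhere in the guest */
		if (vcpu->kvm->arch.pmu_filter &&
		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
			val |= ARMV8_PMU_EXCLUDE_EL0 | ARMV8_PMU_EXCLUDE_EL1;

		return val & ~ARMV8_PMU_INCLUDE_EL2;
	}

and have both writethrough_pmevtyper() and the vcpu_load() path go
through it before touching the hardware register.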

Thanks,
Oliver
