Message-ID: <aD96rn78BSUDbEu1@linux.dev>
Date: Tue, 3 Jun 2025 15:43:58 -0700
From: Oliver Upton <oliver.upton@...ux.dev>
To: Colton Lewis <coltonlewis@...gle.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
	Jonathan Corbet <corbet@....net>,
	Russell King <linux@...linux.org.uk>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
	Joey Gouly <joey.gouly@....com>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Zenghui Yu <yuzenghui@...wei.com>,
	Mark Rutland <mark.rutland@....com>, Shuah Khan <shuah@...nel.org>,
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
	linux-perf-users@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 00/17] ARM64 PMU Partitioning

On Mon, Jun 02, 2025 at 07:26:45PM +0000, Colton Lewis wrote:
> Caveats:
> 
> Because the most consistent and performant thing to do was untrap
> PMCR_EL0, the number of counters visible to the guest via PMCR_EL0.N
> is always equal to the value KVM sets for MDCR_EL2.HPMN. Previously
> allowed writes to PMCR_EL0.N via {GET,SET}_ONE_REG no longer affect
> the guest.
> 
> These improvements come at the cost of 7-35 new registers that must
> be swapped at every vcpu_load() and vcpu_put() when the feature is
> enabled. I have been informed KVM would like to avoid paying this
> cost when possible.
> 
> One solution is to make the trapping changes and context swapping
> lazy, so that they only take place once the guest has actually
> accessed the PMU; guests that never touch the PMU then never pay the
> cost.

You should try to model this similarly to how we manage the debug
breakpoints/watchpoints. In that case the debug register context is
loaded if either:

 (1) Self-hosted debug is actively in use by the guest, or

 (2) The guest has accessed a debug register since the last vcpu_load()

> This is not done here because it is not crucial to the primary
> functionality and I thought review would be more productive as soon as
> I had something complete enough for reviewers to easily play with.
> 
> However, this or any better ideas are on the table for inclusion in
> future re-rolls.

One of the other things that I'd like to see is if we can pare down the
amount of CPU feature dependencies for a partitioned PMU. Annoyingly,
there aren't a lot of machines out there with FEAT_FGT yet, and you
should be able to make all of this work in VHE + FEAT_PMUv3p1.

That "just" comes at the cost of extra traps (leaving TPM and
potentially TPMCR set). You can mitigate the cost of this by emulating
accesses in the fast path that don't need to go out to a kernel context
to be serviced. Same goes for requiring FEAT_HPMN0 to expose 0 event
counters; we can fall back to TPM traps if needed.

Taking perf out of the picture should still give you a significant
reduction in vPMU overhead.

Last thing, let's table guest support for FEAT_PMUv3_ICNTR for the time
being. Yes, it falls in the KVM-owned range, but we can just handle it
with a fine-grained undef for now. Once the core infrastructure has
landed upstream we can start layering new features into the partitioned
implementation.

Thanks,
Oliver
