Message-ID: <ebd33ad5-6a22-4155-9525-87937ee3c4e2@amd.com>
Date: Wed, 13 Aug 2025 15:26:28 +0530
From: Sandipan Das <sandipan.das@....com>
To: Sean Christopherson <seanjc@...gle.com>, Marc Zyngier <maz@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>, Tianrui Zhao
<zhaotianrui@...ngson.cn>, Bibo Mao <maobibo@...ngson.cn>,
Huacai Chen <chenhuacai@...nel.org>, Anup Patel <anup@...infault.org>,
Paul Walmsley <paul.walmsley@...ive.com>, Palmer Dabbelt
<palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>,
Xin Li <xin@...or.com>, "H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>, Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
kvm@...r.kernel.org, loongarch@...ts.linux.dev,
kvm-riscv@...ts.infradead.org, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
Kan Liang <kan.liang@...ux.intel.com>, Yongwei Ma <yongwei.ma@...el.com>,
Mingwei Zhang <mizhang@...gle.com>,
Xiong Zhang <xiong.y.zhang@...ux.intel.com>,
Dapeng Mi <dapeng1.mi@...ux.intel.com>
Subject: Re: [PATCH v5 17/44] KVM: x86/pmu: Snapshot host (i.e. perf's)
reported PMU capabilities
On 07-08-2025 01:26, Sean Christopherson wrote:
> Take a snapshot of the unadulterated PMU capabilities provided by perf so
> that KVM can compare guest vPMU capabilities against hardware capabilities
> when determining whether or not to intercept PMU MSRs (and RDPMC).
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/pmu.c | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 3206412a35a1..0f3e011824ed 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -26,6 +26,10 @@
> /* This is enough to filter the vast majority of currently defined events. */
> #define KVM_PMU_EVENT_FILTER_MAX_EVENTS 300
>
> +/* Unadulterated PMU capabilities of the host, i.e. of hardware. */
> +static struct x86_pmu_capability __read_mostly kvm_host_pmu;
> +
> +/* KVM's PMU capabilities, i.e. the intersection of KVM and hardware support. */
> struct x86_pmu_capability __read_mostly kvm_pmu_cap;
> EXPORT_SYMBOL_GPL(kvm_pmu_cap);
>
> @@ -104,6 +108,8 @@ void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
> bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
> int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS;
>
> + perf_get_x86_pmu_capability(&kvm_host_pmu);
> +
> /*
> * Hybrid PMUs don't play nice with virtualization without careful
> * configuration by userspace, and KVM's APIs for reporting supported
> @@ -114,18 +120,16 @@ void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
> enable_pmu = false;
>
> if (enable_pmu) {
> - perf_get_x86_pmu_capability(&kvm_pmu_cap);
> -
> /*
> * WARN if perf did NOT disable hardware PMU if the number of
> * architecturally required GP counters aren't present, i.e. if
> * there are a non-zero number of counters, but fewer than what
> * is architecturally required.
> */
> - if (!kvm_pmu_cap.num_counters_gp ||
> - WARN_ON_ONCE(kvm_pmu_cap.num_counters_gp < min_nr_gp_ctrs))
> + if (!kvm_host_pmu.num_counters_gp ||
> + WARN_ON_ONCE(kvm_host_pmu.num_counters_gp < min_nr_gp_ctrs))
> enable_pmu = false;
> - else if (is_intel && !kvm_pmu_cap.version)
> + else if (is_intel && !kvm_host_pmu.version)
> enable_pmu = false;
> }
>
> @@ -134,6 +138,7 @@ void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
> return;
> }
>
> + memcpy(&kvm_pmu_cap, &kvm_host_pmu, sizeof(kvm_host_pmu));
> kvm_pmu_cap.version = min(kvm_pmu_cap.version, 2);
> kvm_pmu_cap.num_counters_gp = min(kvm_pmu_cap.num_counters_gp,
> pmu_ops->MAX_NR_GP_COUNTERS);
Reviewed-by: Sandipan Das <sandipan.das@....com>