Message-ID: <aXvgfM_rPNmmXDwn@google.com>
Date: Thu, 29 Jan 2026 14:34:36 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Jim Mattson <jmattson@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
James Clark <james.clark@...aro.org>, Shuah Khan <shuah@...nel.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME
changes and nested transitions
On Wed, Jan 28, 2026, Jim Mattson wrote:
> On Thu, Jan 22, 2026 at 8:55 AM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Wed, Jan 21, 2026, Jim Mattson wrote:
> > > diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > > index f0aa6996811f..7b32796213a0 100644
> > > --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > > +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > > @@ -26,6 +26,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
> > >  KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
> > >  KVM_X86_PMU_OP(mediated_load)
> > >  KVM_X86_PMU_OP(mediated_put)
> > > +KVM_X86_PMU_OP_OPTIONAL(set_pmc_eventsel_hw_enable)
> > >
> > >  #undef KVM_X86_PMU_OP
> > >  #undef KVM_X86_PMU_OP_OPTIONAL
> > > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > > index 833ee2ecd43f..1541c201285b 100644
> > > --- a/arch/x86/kvm/pmu.c
> > > +++ b/arch/x86/kvm/pmu.c
> > > @@ -1142,6 +1142,13 @@ void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
> > >  }
> > >  EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
> > >
> > > +void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
> > > +					 unsigned long *bitmap, bool enable)
> > > +{
> > > +	kvm_pmu_call(set_pmc_eventsel_hw_enable)(vcpu, bitmap, enable);
> > > +}
> > > +EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_set_pmc_eventsel_hw_enable);
> >
> > Why bounce through a PMU op just to go from nested.c to pmu.c? AFAICT, common
> > x86 code never calls kvm_pmu_set_pmc_eventsel_hw_enable(), just wire up calls
> > directly to amd_pmu_refresh_host_guest_eventsels().
>
> It seemed that pmu.c deliberately didn't export anything. All accesses
> were via virtual function table. But maybe that was just happenstance.
Probably just happenstance?
> Should I create a separate pmu.h, or just throw the prototype into
> svm.h?
I say just throw it in svm.h. We've had pmu_intel.h for a long time, and there's
hardly anything in there. And somewhat surprisingly, only two things in vmx.h
obviously could go in pmu_intel.h:

void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
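
For the SVM side, a rough sketch of the direct wiring (untested; everything
other than amd_pmu_refresh_host_guest_eventsels() itself, i.e. the caller name
and the bitmap choice, is made up purely for illustration):

/* arch/x86/kvm/svm/svm.h */
void amd_pmu_refresh_host_guest_eventsels(struct kvm_vcpu *vcpu,
					  unsigned long *bitmap, bool enable);

/* arch/x86/kvm/svm/nested.c */
static void nested_svm_toggle_hg_only_pmcs(struct kvm_vcpu *vcpu, bool enable)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

	/* Call the AMD PMU helper directly instead of bouncing through a PMU op. */
	amd_pmu_refresh_host_guest_eventsels(vcpu, pmu->all_valid_pmc_idx,
					     enable);
}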