Open Source and information security mailing list archives
Message-ID: <CALMp9eSryGLaHfH0fWeQco1rTY57q=pskB5H50u2z4nxBuPqYA@mail.gmail.com>
Date: Wed, 28 Jan 2026 15:43:17 -0800
From: Jim Mattson <jmattson@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Thomas Gleixner <tglx@...utronix.de>, 
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, 
	Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org, 
	"H. Peter Anvin" <hpa@...or.com>, Peter Zijlstra <peterz@...radead.org>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, 
	Mark Rutland <mark.rutland@....com>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>, 
	James Clark <james.clark@...aro.org>, Shuah Khan <shuah@...nel.org>, kvm@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org, 
	linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME
 changes and nested transitions

On Thu, Jan 22, 2026 at 8:55 AM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Wed, Jan 21, 2026, Jim Mattson wrote:
> > diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > index f0aa6996811f..7b32796213a0 100644
> > --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > @@ -26,6 +26,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
> >  KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
> >  KVM_X86_PMU_OP(mediated_load)
> >  KVM_X86_PMU_OP(mediated_put)
> > +KVM_X86_PMU_OP_OPTIONAL(set_pmc_eventsel_hw_enable)
> >
> >  #undef KVM_X86_PMU_OP
> >  #undef KVM_X86_PMU_OP_OPTIONAL
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index 833ee2ecd43f..1541c201285b 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -1142,6 +1142,13 @@ void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
> >  }
> >  EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
> >
> > +void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
> > +                                    unsigned long *bitmap, bool enable)
> > +{
> > +     kvm_pmu_call(set_pmc_eventsel_hw_enable)(vcpu, bitmap, enable);
> > +}
> > +EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_set_pmc_eventsel_hw_enable);
>
> Why bounce through a PMU op just to go from nested.c to pmu.c?  AFAICT, common
> x86 code never calls kvm_pmu_set_pmc_eventsel_hw_enable(), just wire up calls
> directly to amd_pmu_refresh_host_guest_eventsels().

It seemed that pmu.c deliberately didn't export anything; all accesses
were via the virtual function table. But maybe that was just
happenstance. Should I put the prototype in a separate pmu.h, or just
throw it into svm.h?
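
If calling into pmu.c directly is fine, I assume the wiring is just a
declaration; a sketch only, final home (pmu.h vs. svm.h) TBD:

```c
/* Sketch: declaration only, wherever it ends up living. */
void amd_pmu_refresh_host_guest_eventsels(struct kvm_vcpu *vcpu);
```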

> > @@ -1054,6 +1055,11 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> >       if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
> >               goto out_exit_err;
> >
> > +     kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > +             vcpu_to_pmu(vcpu)->pmc_hostonly, false);
> > +     kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > +             vcpu_to_pmu(vcpu)->pmc_guestonly, true);
> > +
> >       if (nested_svm_merge_msrpm(vcpu))
> >               goto out;
> >
> > @@ -1137,6 +1143,10 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
> >
> >       /* Exit Guest-Mode */
> >       leave_guest_mode(vcpu);
> > +     kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > +             vcpu_to_pmu(vcpu)->pmc_hostonly, true);
> > +     kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > +             vcpu_to_pmu(vcpu)->pmc_guestonly, false);
> >       svm->nested.vmcb12_gpa = 0;
> >       WARN_ON_ONCE(svm->nested.nested_run_pending);
>
> I don't think these are the right places to hook.  Shouldn't KVM update the
> event selectors on _all_ transitions, whether they're architectural or not?  E.g.
> by wrapping {enter,leave}_guest_mode()?

You are so right! I will fix this in the next version.

> static void svm_enter_guest_mode(struct kvm_vcpu *vcpu)
> {
>         enter_guest_mode(vcpu);
>         amd_pmu_refresh_host_guest_eventsels(vcpu);
> }
>
> static void svm_leave_guest_mode(struct kvm_vcpu *vcpu)
> {
>         leave_guest_mode(vcpu);
>         amd_pmu_refresh_host_guest_eventsels(vcpu);
> }
