Message-ID: <ZUudoEyqtf5ZPtPp@google.com>
Date:   Wed, 8 Nov 2023 06:39:28 -0800
From:   Sean Christopherson <seanjc@...gle.com>
To:     Jim Mattson <jmattson@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Kan Liang <kan.liang@...ux.intel.com>,
        Dapeng Mi <dapeng1.mi@...ux.intel.com>,
        Jinrong Liang <cloudliang@...cent.com>,
        Aaron Lewis <aaronlewis@...gle.com>,
        Like Xu <likexu@...cent.com>
Subject: Re: [PATCH v7 04/19] KVM: x86/pmu: Setup fixed counters' eventsel
 during PMU initialization

On Tue, Nov 07, 2023, Jim Mattson wrote:
> On Tue, Nov 7, 2023 at 4:31 PM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > Set the eventsel for all fixed counters during PMU initialization; the
> > eventsel is hardcoded and consumed if and only if the counter is supported,
> > i.e. there is no reason to redo the setup every time the PMU is refreshed.
> >
> > Configuring all KVM-supported fixed counters also eliminates a potential
> > pitfall if/when KVM supports discontiguous fixed counters, in which case
> > configuring only nr_arch_fixed_counters will be insufficient (ignoring the
> > fact that KVM will need many other changes to support discontiguous fixed
> > counters).
> >
> > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> > ---
> >  arch/x86/kvm/vmx/pmu_intel.c | 14 ++++----------
> >  1 file changed, 4 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> > index c4f2c6a268e7..5fc5a62af428 100644
> > --- a/arch/x86/kvm/vmx/pmu_intel.c
> > +++ b/arch/x86/kvm/vmx/pmu_intel.c
> > @@ -409,7 +409,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >   * Note, reference cycles is counted using a perf-defined "psuedo-encoding",
> >   * as there is no architectural general purpose encoding for reference cycles.
> >   */
> > -static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
> > +static u64 intel_get_fixed_pmc_eventsel(int index)
> >  {
> >         const struct {
> >                 u8 eventsel;
> > @@ -419,17 +419,11 @@ static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
> >                 [1] = { 0x3c, 0x00 }, /* CPU Cycles/ PERF_COUNT_HW_CPU_CYCLES. */
> >                 [2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES*/
> >         };
> > -       int i;
> >
> >         BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);
> >
> > -       for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
> > -               int index = array_index_nospec(i, KVM_PMC_MAX_FIXED);
> > -               struct kvm_pmc *pmc = &pmu->fixed_counters[index];
> > -
> > -               pmc->eventsel = (fixed_pmc_events[index].unit_mask << 8) |
> > -                                fixed_pmc_events[index].eventsel;
> > -       }
> > +       return (fixed_pmc_events[index].unit_mask << 8) |
> > +               fixed_pmc_events[index].eventsel;
> 
> Can I just say that it's really confusing that the value returned by
> intel_get_fixed_pmc_eventsel() is the concatenation of an 8-bit "unit
> mask" and an 8-bit "eventsel"?

Heh, blame the SDM for having an "event select" field in "event select" MSRs.

Is this better?

	const struct {
		u8 event;
		u8 unit_mask;
	} fixed_pmc_events[] = {
		[0] = { 0xc0, 0x00 }, /* Instruction Retired / PERF_COUNT_HW_INSTRUCTIONS. */
		[1] = { 0x3c, 0x00 }, /* CPU Cycles / PERF_COUNT_HW_CPU_CYCLES. */
		[2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES. */
	};

	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);

	return (fixed_pmc_events[index].unit_mask << 8) |
		fixed_pmc_events[index].event;


Or are you complaining about the fact that they're split at all?  I'm open to any
format, though I personally found the separate umask and event values helpful
when trying to understand what's going on.
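
For anyone skimming the thread, below is a minimal, self-contained sketch of how
the packed value lines up with the architectural event-select layout (event
select in bits 7:0, unit mask in bits 15:8).  The pack/decode helpers, the mask
macros, and the standalone main() are purely illustrative and not part of the
patch:

/*
 * Illustrative only: shows how a u64 built as (unit_mask << 8) | event,
 * as in the helper discussed above, decomposes using the architectural
 * event-select layout (event select in bits 7:0, unit mask in bits 15:8).
 */
#include <stdint.h>
#include <stdio.h>

#define EVENTSEL_EVENT_MASK	0x00ffULL
#define EVENTSEL_UMASK_MASK	0xff00ULL

static uint64_t pack_fixed_eventsel(uint8_t event, uint8_t unit_mask)
{
	return ((uint64_t)unit_mask << 8) | event;
}

int main(void)
{
	/* Fixed counter 2: Reference Cycles, event 0x00, umask 0x03. */
	uint64_t eventsel = pack_fixed_eventsel(0x00, 0x03);

	printf("event=0x%02llx umask=0x%02llx\n",
	       (unsigned long long)(eventsel & EVENTSEL_EVENT_MASK),
	       (unsigned long long)((eventsel & EVENTSEL_UMASK_MASK) >> 8));
	return 0;
}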
