Message-ID: <CAL_JsqK5TT1usMUY1Eaxy6qyGoWLj5R8XRNG-L6h-1S3WQfkRg@mail.gmail.com>
Date: Mon, 10 Jun 2024 10:42:55 -0600
From: Rob Herring <robh@...nel.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Russell King <linux@...linux.org.uk>, Peter Zijlstra <peterz@...radead.org>, 
	Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>, 
	Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>, 
	James Morse <james.morse@....com>, Suzuki K Poulose <suzuki.poulose@....com>, 
	Zenghui Yu <yuzenghui@...wei.com>, Catalin Marinas <catalin.marinas@....com>, 
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org, 
	linux-perf-users@...r.kernel.org, kvmarm@...ts.linux.dev
Subject: Re: [PATCH 3/9] perf: arm_pmu: Remove event index to counter remapping

On Mon, Jun 10, 2024 at 4:44 AM Mark Rutland <mark.rutland@....com> wrote:
>
> On Fri, Jun 07, 2024 at 02:31:28PM -0600, Rob Herring (Arm) wrote:
> > Xscale and Armv6 PMUs defined the cycle counter at index 0 with event
> > counters starting at 1, so the event index to counter numbering was
> > 1:1. On Armv7 and later, the cycle counter moved to index 31 and event
> > counters start at 0. The drivers for Armv7 and PMUv3 kept the old
> > event index numbering and introduced an event index to counter
> > conversion, which uses masking to map an event index to a counter
> > number. That operation relies on having at most 32 counters, so that
> > cycle counter index 0 can wrap to counter number 31.

[...]

> > @@ -783,7 +767,7 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> >       struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> >
> >       /* Clear any unused counters to avoid leaking their contents */
> > -     for_each_clear_bit(i, cpuc->used_mask, cpu_pmu->num_events) {
> > +     for_each_clear_bit(i, cpuc->used_mask, ARMPMU_MAX_HWEVENTS) {
> >               if (i == ARMV8_IDX_CYCLE_COUNTER)
> >                       write_pmccntr(0);
> >               else
>
> IIUC this will now hit all unimplemented counters; e.g. for N counters the body
> will run for counters N..31, and the else case has:
>
>         armv8pmu_write_evcntr(i, 0);
>
> ... where the resulting write to PMEVCNTR<n>_EL0 for unimplemented
> counters is CONSTRAINED UNPREDICTABLE and might be UNDEFINED.
>
> We can fix that with for_each_andnot_bit(), e.g.

Good catch. Fixed.

>
>         for_each_andnot_bit(i, cpu_pmu->cntr_mask, cpuc->used_mask,
>                             ARMPMU_MAX_HWEVENTS) {
>                 if (i == ARMV8_IDX_CYCLE_COUNTER)
>                         write_pmccntr(0);
>                 else
>                          armv8pmu_write_evcntr(i, 0);
>         }
>
> [...]
>
> > @@ -905,7 +889,7 @@ static int armv8pmu_get_single_idx(struct pmu_hw_events *cpuc,
> >  {
> >       int idx;
> >
> > -     for (idx = ARMV8_IDX_COUNTER0; idx < cpu_pmu->num_events; idx++) {
> > +     for_each_set_bit(idx, cpu_pmu->cntr_mask, 31) {
> >               if (!test_and_set_bit(idx, cpuc->used_mask))
> >                       return idx;
> >       }
> > @@ -921,7 +905,9 @@ static int armv8pmu_get_chain_idx(struct pmu_hw_events *cpuc,
> >        * Chaining requires two consecutive event counters, where
> >        * the lower idx must be even.
> >        */
> > -     for (idx = ARMV8_IDX_COUNTER0 + 1; idx < cpu_pmu->num_events; idx += 2) {
> > +     for_each_set_bit(idx, cpu_pmu->cntr_mask, 31) {
> > +             if (!(idx & 0x1))
> > +                     continue;
> >               if (!test_and_set_bit(idx, cpuc->used_mask)) {
> >                       /* Check if the preceding even counter is available */
> >                       if (!test_and_set_bit(idx - 1, cpuc->used_mask))
>
> It would be nice to replace those instances of '31' with something
> indicating that this was only covering the generic/programmable
> counters, but I wasn't able to come up with a nice mnemonic for that.
> The best I could think of was:
>
> #define ARMV8_MAX_NR_GENERIC_COUNTERS 31
>
> Maybe it makes sense to define that along with ARMV8_IDX_CYCLE_COUNTER.

I've got nothing better. :) I think there are a few other spots that could use this.

[...]

> >       /* Read the nb of CNTx counters supported from PMNC */
> > -     *nb_cnt = (armv7_pmnc_read() >> ARMV7_PMNC_N_SHIFT) & ARMV7_PMNC_N_MASK;
> > +     nb_cnt = (armv7_pmnc_read() >> ARMV7_PMNC_N_SHIFT) & ARMV7_PMNC_N_MASK;
> > +     bitmap_set(cpu_pmu->cntr_mask, 0, nb_cnt);
> >
> >       /* Add the CPU cycles counter */
> > -     *nb_cnt += 1;
> > +     bitmap_set(cpu_pmu->cntr_mask, ARMV7_IDX_CYCLE_COUNTER, 1);
>
> This can be:
>
>         set_bit(ARMV7_IDX_CYCLE_COUNTER, cpu_pmu->cntr_mask);
>
> ... and likewise for the PMUv3 version.

Indeed. The documentation in bitmap.h isn't clear that set_bit() works on
bitmaps larger than one unsigned long, given it describes set_bit() as
just "*addr |= bit". I guess I don't use bitops enough...

Rob
