Date:   Thu, 25 Aug 2022 11:26:19 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Stephane Eranian <eranian@...gle.com>
Cc:     "Liang, Kan" <kan.liang@...ux.intel.com>,
        linux-kernel@...r.kernel.org, kan.liang@...el.com,
        ak@...ux.intel.com, namhyung.kim@...nel.org, irogers@...gle.com
Subject: Re: [PATCH] perf/x86/intel/uncore: fix broken read_counter() for SNB
 IMC PMU

On Mon, Aug 15, 2022 at 03:28:36PM -0700, Stephane Eranian wrote:
> On Thu, Aug 4, 2022 at 6:09 AM Liang, Kan <kan.liang@...ux.intel.com> wrote:
> >
> >
> >
> > On 2022-08-03 12:00 p.m., Stephane Eranian wrote:
> > > Existing code was generating bogus counts for the SNB IMC bandwidth counters:
> > >
> > > $ perf stat -a -I 1000 -e uncore_imc/data_reads/,uncore_imc/data_writes/
> > >      1.000327813           1,024.03 MiB  uncore_imc/data_reads/
> > >      1.000327813              20.73 MiB  uncore_imc/data_writes/
> > >      2.000580153         261,120.00 MiB  uncore_imc/data_reads/
> > >      2.000580153              23.28 MiB  uncore_imc/data_writes/
> > >
> > > The problem was introduced by commit:
> > >   07ce734dd8ad ("perf/x86/intel/uncore: Clean up client IMC")
> > >
> > > where the read_counter callback was replaced to point to the generic
> > > uncore_mmio_read_counter() function.
> > >
> > > The SNB IMC counters are free-running 32-bit counters laid out
> > > contiguously in MMIO. But uncore_mmio_read_counter() uses a readq()
> > > call and therefore reads 64 bits from MMIO. This is okay for
> > > uncore_perf_event_update(), which shifts the value based on the
> > > actual counter width to compute a delta, but it is not okay for
> > > uncore_pmu_event_start(), which reads the counter as-is and thus
> > > primes event->prev_count with a bogus value, causing the bogus
> > > deltas seen in the perf stat output above.
> > >
> > > The fix is to reintroduce a custom read_counter callback for the SNB
> > > IMC PMU and use readl() instead of readq() (a sketch of such a
> > > callback follows the thread below). With this change, the perf stat
> > > output is back to normal:
> > > $ perf stat -a -I 1000 -e uncore_imc/data_reads/,uncore_imc/data_writes/
> > >      1.000120987             296.94 MiB  uncore_imc/data_reads/
> > >      1.000120987             138.42 MiB  uncore_imc/data_writes/
> > >      2.000403144             175.91 MiB  uncore_imc/data_reads/
> > >      2.000403144              68.50 MiB  uncore_imc/data_writes/
> > >
> > > Signed-off-by: Stephane Eranian <eranian@...gle.com>
> >
> > Reviewed-by: Kan Liang <kan.liang@...ux.intel.com>
> >
> Any further comments?

Got lost in the holiday pile-up, applied!
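
For reference, a minimal sketch of what the readl()-based read_counter
callback described in the quoted commit message could look like. This is
an illustration modeled on the generic uncore_mmio_read_counter() helper,
not necessarily the applied patch; the io_addr check and the use of
hwc->event_base as the MMIO offset are assumed to follow that helper:

static u64 snb_uncore_imc_read_counter(struct intel_uncore_box *box,
					struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;

	/* No MMIO mapping, nothing to read (mirrors the generic helper). */
	if (!box->io_addr)
		return 0;

	/*
	 * The SNB IMC counters are free-running 32-bit counters laid out
	 * contiguously in MMIO, so read exactly 32 bits with readl()
	 * instead of the 64 bits a readq() would pull in.
	 */
	return (u64)readl(box->io_addr + hwc->event_base);
}

A 32-bit access keeps event->prev_count primed with only the targeted
counter's value, which is what uncore_pmu_event_start() needs.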
