Message-ID: <CAM9d7ci5owtM9h_PjsLo6hNz=kZKDT8PcPFOWX41Vf9g+SnpEQ@mail.gmail.com>
Date:   Tue, 11 Jul 2023 10:34:18 -0700
From:   Namhyung Kim <namhyung@...nel.org>
To:     Arnaldo Carvalho de Melo <acme@...nel.org>
Cc:     Ian Rogers <irogers@...gle.com>,
        Sandipan Das <sandipan.das@....com>,
        linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
        peterz@...radead.org, mingo@...hat.com, mark.rutland@....com,
        alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
        adrian.hunter@...el.com, ayush.jain3@....com,
        ananth.narayan@....com, ravi.bangoria@....com,
        santosh.shukla@....com
Subject: Re: [PATCH v2] perf vendor events amd: Fix large metrics

On Tue, Jul 11, 2023 at 7:51 AM Arnaldo Carvalho de Melo
<acme@...nel.org> wrote:
>
> > On Thu, Jul 06, 2023 at 06:49:29AM -0700, Ian Rogers wrote:
> > On Wed, Jul 5, 2023 at 11:34 PM Sandipan Das <sandipan.das@....com> wrote:
> > >
> > > There are cases where a metric requires more events than the number of
> > > available counters. E.g. AMD Zen, Zen 2 and Zen 3 processors have four
> > > data fabric counters but the "nps1_die_to_dram" metric has eight events.
> > > By default, the constituent events are placed in a group and since the
> > > events cannot be scheduled at the same time, the metric is not computed.
> > > The "all metrics" test also fails because of this.
> > >
> > > Use the NO_GROUP_EVENTS constraint for such metrics, which expect the
> > > user to run perf with "--metric-no-group" anyway.
> > >
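> > > As an illustration (a sketch, not the literal patch hunk; the event
> > > list and surrounding fields are abbreviated here), the constraint is
> > > a field in the metric's entry in the vendor JSON under
> > > tools/perf/pmu-events/:
> > >
> > >   {
> > >     "MetricName": "nps1_die_to_dram",
> > >     "MetricExpr": "dram_channel_data_controller_0 + ... + dram_channel_data_controller_7",
> > >     "MetricGroup": "data_fabric",
> > >     "MetricConstraint": "NO_GROUP_EVENTS"
> > >   }
> > >
> > > With the constraint in place the constituent events are no longer
> > > forced into a single group, which matches what running "perf stat -a
> > > --metric-no-group -M nps1_die_to_dram" would do by hand.
> > >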
> > > E.g.
> > >
> > >   $ sudo perf test -v 101
> > >
> > > Before:
> > >
> > >   101: perf all metrics test                                           :
> > >   --- start ---
> > >   test child forked, pid 37131
> > >   Testing branch_misprediction_ratio
> > >   Testing all_remote_links_outbound
> > >   Testing nps1_die_to_dram
> > >   Metric 'nps1_die_to_dram' not printed in:
> > >   Error:
> > >   Invalid event (dram_channel_data_controller_4) in per-thread mode, enable system wide with '-a'.
> > >   Testing macro_ops_dispatched
> > >   Testing all_l2_cache_accesses
> > >   Testing all_l2_cache_hits
> > >   Testing all_l2_cache_misses
> > >   Testing ic_fetch_miss_ratio
> > >   Testing l2_cache_accesses_from_l2_hwpf
> > >   Testing l2_cache_misses_from_l2_hwpf
> > >   Testing op_cache_fetch_miss_ratio
> > >   Testing l3_read_miss_latency
> > >   Testing l1_itlb_misses
> > >   test child finished with -1
> > >   ---- end ----
> > >   perf all metrics test: FAILED!
> > >
> > > After:
> > >
> > >   101: perf all metrics test                                           :
> > >   --- start ---
> > >   test child forked, pid 43766
> > >   Testing branch_misprediction_ratio
> > >   Testing all_remote_links_outbound
> > >   Testing nps1_die_to_dram
> > >   Testing macro_ops_dispatched
> > >   Testing all_l2_cache_accesses
> > >   Testing all_l2_cache_hits
> > >   Testing all_l2_cache_misses
> > >   Testing ic_fetch_miss_ratio
> > >   Testing l2_cache_accesses_from_l2_hwpf
> > >   Testing l2_cache_misses_from_l2_hwpf
> > >   Testing op_cache_fetch_miss_ratio
> > >   Testing l3_read_miss_latency
> > >   Testing l1_itlb_misses
> > >   test child finished with 0
> > >   ---- end ----
> > >   perf all metrics test: Ok
> > >
> > > Reported-by: Ayush Jain <ayush.jain3@....com>
> > > Suggested-by: Ian Rogers <irogers@...gle.com>
> > > Signed-off-by: Sandipan Das <sandipan.das@....com>
> >
> > Acked-by: Ian Rogers <irogers@...gle.com>
>
> Thanks, applied.

If I'm not too late..

Tested-by: Namhyung Kim <namhyung@...nel.org>

Thanks,
Namhyung
