Message-ID: <d1fe801a-22d0-1f9b-b127-227b21635bd5@linux.intel.com>
Date:   Tue, 18 Apr 2023 09:03:30 -0400
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Ian Rogers <irogers@...gle.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Florian Fischer <florian.fischer@...q.space>,
        linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] perf stat: Introduce skippable evsels



On 2023-04-17 2:13 p.m., Ian Rogers wrote:
> The json TopdownL1 metric group is enabled unconditionally, if
> present, for the perf stat default. Enabling it on Skylake results in
> multiplexing, but TopdownL1 on Skylake already has multiplexing
> unrelated to this change - at least on the machine I was testing on.
> We could remove the metric group TopdownL1 on Skylake so that we
> don't enable it by default; the group TmaL1 would still remain. To
> me, disabling TopdownL1 seems less desirable than running with
> multiplexing - previously, getting into topdown analysis required
> knowing that "perf stat -M TopdownL1" is the way to do it.

To be honest, I don't think it's a good idea to remove TopdownL1. We
cannot remove it just because the new approach cannot handle it. The
perf stat default worked well up to 6.3-rc7; this is a regression in
the current perf-tools-next.

But I'm OK with adding some flags to the metrics to help the perf tool
handle this case specially, if you prefer to modify the event list.

> 
> This doesn't relate to this change, which is about making it so that
> failing to set up TopdownL1 doesn't cause an early exit. The reason I
> showed TigerLake output was that on TigerLake the skip/fallback
> approach works, while Skylake just needs the events disabled/skipped
> unless it has sufficient permissions. Note the :u on the events in:

perf_event_open() should be good at detecting insufficient permission,
but it doesn't work for detecting whether an event exists. That's
because the kernel only checks the features, not the specific events.
Relying on the result of perf_event_open() is not reliable here.
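
As a rough illustration (a minimal sketch under my own assumptions,
not the perf tool's actual probing code), opening an event directly
shows the asymmetry: a permission problem reliably comes back as
EACCES/EPERM, but an event encoding the CPU does not implement may
still open successfully, because the kernel validates the attr layout
and feature support rather than the specific event:

/*
 * Sketch only: probe an event with perf_event_open().
 * EACCES/EPERM => insufficient permission (detectable).
 * Success      => does NOT guarantee the event exists on this CPU;
 *                 an unknown raw encoding can still open and simply
 *                 count nothing useful.
 */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>

static int probe_event(__u32 type, __u64 config)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;
	attr.config = config;
	attr.exclude_kernel = 1;	/* like the :u fallback */

	fd = syscall(SYS_perf_event_open, &attr, 0 /* self */,
		     -1 /* any cpu */, -1 /* no group */, 0 /* flags */);
	if (fd < 0) {
		if (errno == EACCES || errno == EPERM)
			fprintf(stderr, "insufficient permission\n");
		else
			fprintf(stderr, "open failed: %s\n",
				strerror(errno));
		return -1;
	}
	close(fd);
	return 0;	/* opened, but existence still not proven */
}

int main(void)
{
	/* A known-good event opens; an unimplemented encoding may too. */
	return probe_event(PERF_TYPE_HARDWARE, PERF_COUNT_HW_CPU_CYCLES);
}

So even a successful open tells the tool nothing definitive about
whether the underlying event is actually implemented.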


>> From your test result in the v2 description, we can see that there is
>> no TopdownL1, which is good and expected. However, there is a (48.99%)
>> with the cycles:u event, which implies multiplexing. Could you please
>> check what the problem is here?
>> Also, if it's because of background load, all the events should be
>> multiplexing. But it looks like only cycles:u has multiplexing. The
>> other events, instructions:u, branches:u and branch-misses:u, work
>> without multiplexing. That's very strange.
> I wasn't able to reproduce it and suspect it was a transient thing. I
> think there are multiplexing issues to look into: 2 events on a fixed
> counter on Icelake+ will trigger multiplexing on all counters, and
> Skylake's 3 fixed and 4 generic counters should fit TopdownL1.

I just found a Cascade Lake machine. With this patch + the current
perf-tools-next, part of TopdownL1 and multiplexing can still be
observed.

$ sudo ./perf stat true

 Performance counter stats for 'true':

              2.91 msec task-clock                       #    0.316 CPUs utilized
                 0      context-switches                 #    0.000 /sec
                 0      cpu-migrations                   #    0.000 /sec
                45      page-faults                      #   15.474 K/sec
         2,819,972      cycles                           #    0.970 GHz                         (60.14%)
         5,391,406      instructions                     #    1.91  insn per cycle
         1,068,575      branches                         #  367.442 M/sec
             8,455      branch-misses                    #    0.79% of all branches
            70,283      CPU_CLK_UNHALTED.REF_XCLK        #   24.168 M/sec
            48,806      INT_MISC.RECOVERY_CYCLES_ANY     #   16.783 M/sec                       (39.86%)

       0.009204517 seconds time elapsed

       0.000000000 seconds user
       0.009614000 seconds sys
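
For context (a rough sketch from my reading of the output, not perf's
actual source), the percentages such as (60.14%) above are the
fraction of wall time the event actually spent on a hardware counter
(time_running / time_enabled), and a multiplexed count is scaled up by
the inverse of that fraction:

/*
 * Sketch only: scaling a multiplexed count the way a perf-stat-like
 * tool would, given the time_enabled/time_running values read back
 * with PERF_FORMAT_TOTAL_TIME_ENABLED/RUNNING.
 */
#include <stdint.h>

static uint64_t scale_count(uint64_t raw, uint64_t time_enabled,
			    uint64_t time_running)
{
	if (time_running == 0)
		return 0;	/* event never got scheduled */
	/* double used for brevity; real tools may use wider integers */
	return (uint64_t)((double)raw * time_enabled / time_running);
}

So the (60.14%) and (39.86%) above mean those events shared counters
and were only scheduled for part of the run.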


Thanks,
Kan
