Message-ID: <CAJ3xEMiOtDe5OeC8oT2NyVu5BEmH_eLgAAH4voLqejWgsvG4xQ@mail.gmail.com>
Date:   Thu, 15 Oct 2020 17:53:40 +0300
From:   Or Gerlitz <gerlitz.or@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Brendan Gregg <bgregg@...flix.com>
Cc:     Linux Netdev List <netdev@...r.kernel.org>
Subject: perf measure for stalled cycles per instruction on newer Intel processors

Hi,

Earlier Intel processors (e.g. the E5-2650) support the two more
classical stall events (for the backend and the frontend [1]), and perf
then shows the nice measure of stalled cycles per instruction - e.g.
here, where we have an IPC of 0.91 and a CSPI (see [2]) of 0.68:

     9,568,273,970      cycles                    #    2.679 GHz                      (53.30%)
     5,979,155,843      stalled-cycles-frontend   #   62.49% frontend cycles idle     (53.31%)
     4,874,774,413      stalled-cycles-backend    #   50.95% backend cycles idle      (53.31%)
     8,732,767,750      instructions              #    0.91  insn per cycle
                                                  #    0.68  stalled cycles per insn  (59.97%)
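
For reference, both ratios fall directly out of the raw counts above;
the max() is my reading of how perf picks between the two stall
counters (see the sketch further down):

    IPC  = instructions / cycles
         = 8,732,767,750 / 9,568,273,970 ~= 0.91

    CSPI = max(stalled-cycles-frontend, stalled-cycles-backend) / instructions
         = 5,979,155,843 / 8,732,767,750 ~= 0.68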

Running on a system with a newer processor (6254), I noted that there
are sort of a zillion (..) stall events [3], and perf stat -e $EVENT
for them does show their count.

However, perf stat no longer shows the "stalled cycles per insn"
computation.
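
I can of course count one of the stall events from [3] next to
instructions and divide by hand, e.g.:

    perf stat -e instructions,cycle_activity.stalls_total -a sleep 10

and then compute stalls_total / instructions - assuming
cycle_activity.stalls_total is a reasonable stand-in for the old
backend/frontend pair, which I'm not sure about - but it would be nice
to have perf do this.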

Looking at the perf sources, it seems we do that only if the
backend/frontend events exist (the perf_stat__print_shadow_stats
function) - am I correct in my reading of the code?
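
To illustrate my reading, here is a minimal standalone sketch (my
paraphrase, not a verbatim quote from tools/perf/util/stat-shadow.c;
the max() choice reproduces the 0.68 above but is my interpretation of
the code):

#include <stdio.h>

/* counts from the E5-2650 run above */
static const double instructions  = 8732767750.0;
static const double stalled_front = 5979155843.0;
static const double stalled_back  = 4874774413.0;

int main(void)
{
	/* my reading: take the larger of the two generic stall counters */
	double total = stalled_front > stalled_back ? stalled_front
						    : stalled_back;

	/* the ratio is only printed when the generic stall events were
	 * actually counted - which would explain why the line disappears
	 * on processors that don't expose them */
	if (total > 0 && instructions > 0)
		printf("%.2f stalled cycles per insn\n",
		       total / instructions);
	return 0;
}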

If that's the case, what's needed here to get this or a similar
measure back?

If that's not the case, could you suggest how to get perf to emit this
quantity?

Thanks,

Or.

[1] perf list | grep stalled-cycles

stalled-cycles-backend OR idle-cycles-backend      [Hardware event]
stalled-cycles-frontend OR idle-cycles-frontend    [Hardware event]

[2] http://www.brendangregg.com/perf.html#CPUstatistics

[3] perf list | grep stall -A 1 (output trimmed by hand, there are more..)

  cycle_activity.stalls_l3_miss
       [Execution stalls while L3 cache miss demand load is outstanding]
  cycle_activity.stalls_l1d_miss
       [Execution stalls while L1 cache miss demand load is outstanding]
  cycle_activity.stalls_l2_miss
       [Execution stalls while L2 cache miss demand load is outstanding]
  cycle_activity.stalls_mem_any
       [Execution stalls while memory subsystem has an outstanding load]
  cycle_activity.stalls_total
       [Total execution stalls]
  ild_stall.lcp
       [Core cycles the allocator was stalled due to recovery from earlier
  partial_rat_stalls.scoreboard
       [Cycles where the pipeline is stalled due to serializing operations]
  resource_stalls.any
       [Resource-related stall cycles]
  resource_stalls.sb
       [Cycles stalled due to no store buffers available. (not including
        draining form sync)]
  uops_executed.stall_cycles
       [Counts number of cycles no uops were dispatched to be executed on this
  uops_issued.stall_cycles
       [Cycles when Resource Allocation Table (RAT) does not issue Uops to
  uops_retired.stall_cycles
       [Cycles without actually retired uops]
