Date:   Tue, 22 Mar 2022 09:13:42 +0200
From:   Adrian Hunter <adrian.hunter@...el.com>
To:     "Steinar H. Gunderson" <sesse@...gle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc:     Jiri Olsa <jolsa@...hat.com>, Namhyung Kim <namhyung@...nel.org>,
        linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] perf intel-pt: Synthesize cycle events

On 21/03/2022 20:14, Steinar H. Gunderson wrote:
> There is no good reason why we cannot synthesize "cycle" events
> from Intel PT just as we can synthesize "instruction" events,
> in particular when CYC packets are available. This enables using
> PT to get much more accurate cycle profiles than regular sampling
> (record -e cycles) when the work lasts for very short periods (<10 ms).
> Thus, add support for this, based on the existing IPC calculation
> framework. The new option to --itrace is "y" (for cYcles), as "c" was
> already taken for calls. Cycle and instruction events can be synthesized
> together, and are by default.
> 
> The only real caveat is that CYC packets are only emitted alongside
> some other packet, which in practice means when a branch instruction
> is encountered (and not even all branches). Thus, even with no
> subsampling (e.g. --itrace=y0ns), it is impossible to get finer
> accuracy than a single basic block, and all cycles spent executing
> that block will be attributed to the branch instruction that ends the
> packet. One therefore cannot know whether the cycles came from e.g. a
> specific load, a mispredicted branch, or something else. When
> subsampling (which is the default), the cycle events get smeared out
> even more, but are still generally useful for attributing cycle
> counts to functions.
> 
> Signed-off-by: Steinar H. Gunderson <sesse@...gle.com>
> Reviewed-by: Adrian Hunter <adrian.hunter@...el.com>
> ---
>  tools/perf/Documentation/itrace.txt        |  3 +-
>  tools/perf/Documentation/perf-intel-pt.txt | 36 ++++++++----
>  tools/perf/util/auxtrace.c                 |  9 ++-
>  tools/perf/util/auxtrace.h                 |  7 ++-
>  tools/perf/util/intel-pt.c                 | 68 ++++++++++++++++++++--
>  5 files changed, 102 insertions(+), 21 deletions(-)

Thanks for sending the patch.  There is one variable left over from my
earlier suggestions - see below.
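For readers following along, the new option is exercised roughly like this
(a sketch based on the patch's itrace documentation changes; the exact
event modifiers and period suffix are illustrative, and recording requires
Intel PT hardware):

```shell
# Record an Intel PT trace with cycle-accurate (CYC) packets enabled
perf record -e intel_pt/cyc/u -- ./workload

# Synthesize cycle events from the trace; "y" selects cYcles,
# e.g. "y0ns" would request no subsampling
perf report --itrace=y
```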

> diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
> index e8613cbda331..364fdfe59828 100644
> --- a/tools/perf/util/intel-pt.c
> +++ b/tools/perf/util/intel-pt.c
> @@ -5,6 +5,7 @@
>   */
>  
>  #include <inttypes.h>
> +#include <linux/perf_event.h>
>  #include <stdio.h>
>  #include <stdbool.h>
>  #include <errno.h>
> @@ -89,6 +90,10 @@ struct intel_pt {
>  	u64 instructions_sample_type;
>  	u64 instructions_id;
>  
> +	bool sample_cycles;
> +	u64 cycles_sample_type;
> +	u64 cycles_id;
> +
>  	bool sample_branches;
>  	u32 branches_filter;
>  	u64 branches_sample_type;
> @@ -191,10 +196,13 @@ struct intel_pt_queue {
>  	u32 flags;
>  	u16 insn_len;
>  	u64 last_insn_cnt;
> +	u64 last_cyc_cnt;

last_cyc_cnt is now unused, so it can be dropped.

>  	u64 ipc_insn_cnt;
>  	u64 ipc_cyc_cnt;
>  	u64 last_in_insn_cnt;
>  	u64 last_in_cyc_cnt;
> +	u64 last_cy_insn_cnt;
> +	u64 last_cy_cyc_cnt;
>  	u64 last_br_insn_cnt;
>  	u64 last_br_cyc_cnt;
>  	unsigned int cbr_seen;
