Message-ID: <d79a1bbe-bca5-0420-0480-1d508d2a038c@linux.intel.com>
Date:   Mon, 17 Feb 2020 09:22:57 +0800
From:   "Jin, Yao" <yao.jin@...ux.intel.com>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     acme@...nel.org, jolsa@...nel.org, peterz@...radead.org,
        mingo@...hat.com, alexander.shishkin@...ux.intel.com,
        Linux-kernel@...r.kernel.org, ak@...ux.intel.com,
        kan.liang@...el.com, yao.jin@...el.com
Subject: Re: [PATCH v4] perf stat: Show percore counts in per CPU output



On 2/17/2020 6:54 AM, Jiri Olsa wrote:
> On Fri, Feb 14, 2020 at 04:04:52PM +0800, Jin Yao wrote:
> 
> SNIP
> 
>>   CPU1               1,009,312      cpu/event=cpu-cycles,percore/
>>   CPU2               2,784,072      cpu/event=cpu-cycles,percore/
>>   CPU3               2,427,922      cpu/event=cpu-cycles,percore/
>>   CPU4               2,752,148      cpu/event=cpu-cycles,percore/
>>   CPU6               2,784,072      cpu/event=cpu-cycles,percore/
>>   CPU7               2,427,922      cpu/event=cpu-cycles,percore/
>>
>>          1.001416041 seconds time elapsed
>>
>>   v4:
>>   ---
>>   Ravi Bangoria reported an issue in v3: once we offline a CPU,
>>   the output is not correct. The fix is to use the cpu idx in
>>   print_percore_thread rather than the cpu value.
> 
> Acked-by: Jiri Olsa <jolsa@...nel.org>
> 

Thanks so much for ACKing this patch. :)

> btw, there's slight misalignment in -I output, but not due
> to your change, it's there for some time now, and probably
> in other aggregation outputs as well:
> 
> 
>    $ sudo ./perf stat -e cpu/event=cpu-cycles/ -a -A -I 1000
>    #           time CPU                    counts unit events
>         1.000224464 CPU0               7,251,151      cpu/event=cpu-cycles/
>         1.000224464 CPU1              21,614,946      cpu/event=cpu-cycles/
>         1.000224464 CPU2              30,812,097      cpu/event=cpu-cycles/
> 
> should be (extra space after CPUX):
> 
>         1.000224464 CPU2               30,812,097      cpu/event=cpu-cycles/
> 
> I'll put it on my TODO, but you're welcome to check on it ;-)
> 
> thanks,
> jirka
> 

I have a simple fix for this misalignment issue.

diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index bc31fccc0057..95b29c9cba36 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -114,11 +114,11 @@ static void aggr_printout(struct perf_stat_config *config,
                         fprintf(config->output, "S%d-D%d-C%*d%s",
                                 cpu_map__id_to_socket(id),
                                 cpu_map__id_to_die(id),
-                               config->csv_output ? 0 : -5,
+                               config->csv_output ? 0 : -3,
                                 cpu_map__id_to_cpu(id), config->csv_sep);
                 } else {
-                       fprintf(config->output, "CPU%*d%s ",
-                               config->csv_output ? 0 : -5,
+                       fprintf(config->output, "CPU%*d%s",
+                               config->csv_output ? 0 : -7,
                                 evsel__cpus(evsel)->map[id],
                                 config->csv_sep);
                 }
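
For reference, here is a minimal standalone sketch (not part of the patch)
of the fprintf() "%*d" mechanics involved: a negative field width
left-justifies the CPU number, so widening it from 5 to 7 while dropping
the trailing space makes the CPU column one character wider, which is
what restores the alignment. "sep" stands in for config->csv_sep, assuming
it defaults to a single space in non-CSV mode.

#include <stdio.h>

int main(void)
{
        const char *sep = " ";  /* assumed non-CSV default separator */

        /* Old format string: "CPU%*d%s " with width -5 -> 10 columns. */
        printf("CPU%*d%s %18s\n", -5, 2, sep, "30,812,097");

        /* New format string: "CPU%*d%s" with width -7 -> 11 columns. */
        printf("CPU%*d%s%18s\n", -7, 2, sep, "30,812,097");

        return 0;
}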

The following command lines were tested OK:

perf stat -e cpu/event=cpu-cycles/ -I 1000
perf stat -e cpu/event=cpu-cycles/ -a -I 1000
perf stat -e cpu/event=cpu-cycles/ -a -A -I 1000
perf stat -e cpu/event=cpu-cycles,percore/ -a -A -I 1000
perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -I 1000

Could you help take a look at it?

Thanks
Jin Yao
