Message-Id: <8a80b138-77c1-682a-340e-f32d6779d446@linux.ibm.com>
Date: Fri, 14 Feb 2020 10:46:53 +0530
From: Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
To: "Jin, Yao" <yao.jin@...ux.intel.com>
Cc: acme@...nel.org, jolsa@...nel.org, peterz@...radead.org,
mingo@...hat.com, alexander.shishkin@...ux.intel.com,
Linux-kernel@...r.kernel.org, ak@...ux.intel.com,
kan.liang@...el.com, yao.jin@...el.com,
Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Subject: Re: [PATCH v3] perf stat: Show percore counts in per CPU output
On 2/13/20 8:40 PM, Jin, Yao wrote:
>
>
> On 2/13/2020 9:20 PM, Ravi Bangoria wrote:
>> Hi Jin,
>>
>> On 2/13/20 12:45 PM, Jin Yao wrote:
>>> With this patch, for example,
>>>
>>> # perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -- sleep 1
>>>
>>> Performance counter stats for 'system wide':
>>>
>>> CPU0 2,453,061 cpu/event=cpu-cycles,percore/
>>> CPU1 1,823,921 cpu/event=cpu-cycles,percore/
>>> CPU2 1,383,166 cpu/event=cpu-cycles,percore/
>>> CPU3 1,102,652 cpu/event=cpu-cycles,percore/
>>> CPU4 2,453,061 cpu/event=cpu-cycles,percore/
>>> CPU5 1,823,921 cpu/event=cpu-cycles,percore/
>>> CPU6 1,383,166 cpu/event=cpu-cycles,percore/
>>> CPU7 1,102,652 cpu/event=cpu-cycles,percore/
>>>
>>> We can see counts are duplicated in CPU pairs
>>> (CPU0/CPU4, CPU1/CPU5, CPU2/CPU6, CPU3/CPU7).
>>>
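[Editorial note: a minimal standalone sketch, not perf code, of the --percore-show-thread semantics described above. The CPU n / CPU n+4 sibling pairing is assumed from the CPU0/CPU4, CPU1/CPU5, ... pairs in the quoted output: counts are first summed per physical core, then the per-core sum is reported for every logical CPU of that core.]

/* sketch only: per-core aggregation broadcast to SMT siblings */
#include <stdio.h>

#define NR_CPUS  8
#define NR_CORES 4

int main(void)
{
	/* raw per-CPU counts (made-up numbers) */
	unsigned long long counts[NR_CPUS] = {
		1500000, 1000000, 800000, 600000,
		 953061,  823921, 583166, 502652,
	};
	unsigned long long core_sum[NR_CORES] = { 0 };
	int cpu;

	/* percore aggregation: sum SMT siblings (CPU c and CPU c+4) */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		core_sum[cpu % NR_CORES] += counts[cpu];

	/* --percore-show-thread: print the core sum once per logical CPU */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("CPU%-2d %llu\n", cpu, core_sum[cpu % NR_CORES]);

	return 0;
}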
>>
>> I was trying this patch and I am getting somewhat weird results when any
>> CPU is offline. For example,
>>
>> $ lscpu | grep list
>> On-line CPU(s) list: 0-4,6,7
>> Off-line CPU(s) list: 5
>>
>> $ sudo ./perf stat -e cpu/event=cpu-cycles,percore/ -a -A --percore-show-thread -vv -- sleep 1
>> ...
>> cpu/event=cpu-cycles,percore/: 0: 23746491 1001189836 1001189836
>> cpu/event=cpu-cycles,percore/: 1: 19802666 1001291299 1001291299
>> cpu/event=cpu-cycles,percore/: 2: 24211983 1001394318 1001394318
>> cpu/event=cpu-cycles,percore/: 3: 54051396 1001516816 1001516816
>> cpu/event=cpu-cycles,percore/: 4: 6378825 1001064048 1001064048
>> cpu/event=cpu-cycles,percore/: 5: 21299840 1001166297 1001166297
>> cpu/event=cpu-cycles,percore/: 6: 13075410 1001274535 1001274535
>> Performance counter stats for 'system wide':
>> CPU0 30,125,316 cpu/event=cpu-cycles,percore/
>> CPU1 19,802,666 cpu/event=cpu-cycles,percore/
>> CPU2 45,511,823 cpu/event=cpu-cycles,percore/
>> CPU3 67,126,806 cpu/event=cpu-cycles,percore/
>> CPU4 30,125,316 cpu/event=cpu-cycles,percore/
>> CPU7 67,126,806 cpu/event=cpu-cycles,percore/
>> CPU0 30,125,316 cpu/event=cpu-cycles,percore/
>> 1.001918764 seconds time elapsed
>>
>> I see proper result without --percore-show-thread:
>>
>> $ sudo ./perf stat -e cpu/event=cpu-cycles,percore/ -a -A -vv -- sleep 1
>> ...
>> cpu/event=cpu-cycles,percore/: 0: 11676414 1001190709 1001190709
>> cpu/event=cpu-cycles,percore/: 1: 39119617 1001291459 1001291459
>> cpu/event=cpu-cycles,percore/: 2: 41821512 1001391158 1001391158
>> cpu/event=cpu-cycles,percore/: 3: 46853730 1001492799 1001492799
>> cpu/event=cpu-cycles,percore/: 4: 14448274 1001095948 1001095948
>> cpu/event=cpu-cycles,percore/: 5: 42238217 1001191187 1001191187
>> cpu/event=cpu-cycles,percore/: 6: 33129641 1001292072 1001292072
>> Performance counter stats for 'system wide':
>> S0-D0-C0 26,124,688 cpu/event=cpu-cycles,percore/
>> S0-D0-C1 39,119,617 cpu/event=cpu-cycles,percore/
>> S0-D0-C2 84,059,729 cpu/event=cpu-cycles,percore/
>> S0-D0-C3 79,983,371 cpu/event=cpu-cycles,percore/
>> 1.001961563 seconds time elapsed
>>
>> [...]
>>
>
> Thanks so much for reporting this issue!
>
> It looks like I should use the cpu map index in print_percore_thread; I can't use the cpu value directly. I have a fix (a standalone sketch of the index vs. CPU-number distinction follows the diff):
>
> diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
> index 7eb3643a97ae..d89cb0da90f8 100644
> --- a/tools/perf/util/stat-display.c
> +++ b/tools/perf/util/stat-display.c
> @@ -1149,13 +1149,11 @@ static void print_footer(struct perf_stat_config *config)
>  static void print_percore_thread(struct perf_stat_config *config,
>  				 struct evsel *counter, char *prefix)
>  {
> -	int cpu, s, s2, id;
> +	int s, s2, id;
>  	bool first = true;
> 
>  	for (int i = 0; i < perf_evsel__nr_cpus(counter); i++) {
> -		cpu = perf_cpu_map__cpu(evsel__cpus(counter), i);
> -		s2 = config->aggr_get_id(config, evsel__cpus(counter), cpu);
> -
> +		s2 = config->aggr_get_id(config, evsel__cpus(counter), i);
>  		for (s = 0; s < config->aggr_map->nr; s++) {
>  			id = config->aggr_map->map[s];
>  			if (s2 == id)
> @@ -1164,7 +1162,7 @@ static void print_percore_thread(struct perf_stat_config *config,
> 
>  		print_counter_aggrdata(config, counter, s,
>  				       prefix, false,
> -				       &first, cpu);
> +				       &first, i);
>  	}
>  }
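[Editorial note: a small standalone illustration, not perf code, of why the cpu map index rather than the raw CPU number has to be passed once a CPU is offline. The online CPU list is taken from the lscpu output quoted above (CPU 5 offline); once an offline CPU is skipped, index and CPU number diverge, so indexing aggregation data by the CPU number reads the wrong slot, which matches the duplicated CPU0 line and the missing CPU5/CPU6 lines in the broken output.]

/* sketch only: cpu map index vs. CPU number with CPU 5 offline */
#include <stdio.h>

int main(void)
{
	/* online CPUs as a cpu map would record them (CPU 5 is offline) */
	int cpu_map[] = { 0, 1, 2, 3, 4, 6, 7 };
	int nr = sizeof(cpu_map) / sizeof(cpu_map[0]);
	int idx;

	for (idx = 0; idx < nr; idx++) {
		int cpu = cpu_map[idx];

		/* idx and cpu diverge after the offline CPU is skipped */
		printf("map index %d -> CPU %d%s\n", idx, cpu,
		       idx != cpu ? "   (index != CPU number)" : "");
	}

	return 0;
}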
LGTM.
Tested-by: Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Ravi