Date:   Wed, 6 May 2020 16:19:51 +0800
From:   "Jin, Yao" <yao.jin@...ux.intel.com>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     acme@...nel.org, jolsa@...nel.org, peterz@...radead.org,
        mingo@...hat.com, alexander.shishkin@...ux.intel.com,
        Linux-kernel@...r.kernel.org, ak@...ux.intel.com,
        kan.liang@...el.com, yao.jin@...el.com
Subject: Re: [PATCH v2 1/2] perf evsel: Create counts for collecting summary
 data

Hi Jiri,

On 5/5/2020 7:50 AM, Jiri Olsa wrote:
> On Sat, May 02, 2020 at 10:07:04AM +0800, Jin Yao wrote:
>> It would be useful to support overall statistics for perf-stat
>> interval mode, for example, reporting a summary at the end of the
>> "perf-stat -I" output.
>>
>> But since perf-stat supports many aggregation modes, such as
>> --per-thread, --per-socket, -M, etc., we need a solution that
>> doesn't add much complexity.
>>
>> The idea is to create a new 'evsel->summary_counts' which sums up the
>> count deltas per interval. Before reporting the summary, we copy the
>> data from evsel->summary_counts to evsel->counts, and then just follow
>> the current code path.
>>
>>   v2:
>>   ---
>>   Rebase to perf/core branch
>>
>> Signed-off-by: Jin Yao <yao.jin@...ux.intel.com>
>> ---
>>   tools/perf/util/evsel.c | 10 ++++++++--
>>   tools/perf/util/evsel.h |  1 +
>>   tools/perf/util/stat.c  | 20 ++++++++++++++++++++
>>   3 files changed, 29 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
>> index a75bcb95bf23..abc503dd6eda 100644
>> --- a/tools/perf/util/evsel.c
>> +++ b/tools/perf/util/evsel.c
>> @@ -1280,22 +1280,28 @@ void evsel__delete(struct evsel *evsel)
>>   void evsel__compute_deltas(struct evsel *evsel, int cpu, int thread,
>>   			   struct perf_counts_values *count)
>>   {
>> -	struct perf_counts_values tmp;
>> +	struct perf_counts_values tmp, *summary;
>>   
>> -	if (!evsel->prev_raw_counts)
>> +	if (!evsel->prev_raw_counts || !evsel->summary_counts)
>>   		return;
>>   
>>   	if (cpu == -1) {
>>   		tmp = evsel->prev_raw_counts->aggr;
>>   		evsel->prev_raw_counts->aggr = *count;
>> +		summary = &evsel->summary_counts->aggr;
>>   	} else {
>>   		tmp = *perf_counts(evsel->prev_raw_counts, cpu, thread);
>>   		*perf_counts(evsel->prev_raw_counts, cpu, thread) = *count;
>> +		summary = perf_counts(evsel->summary_counts, cpu, thread);
> 
> shouldn't this be enough?
> 
> 		perf_counts(evsel->summary_counts, cpu, thread) = *count
> 
> without the code below.. and similar for aggr case
> however I still wonder if we should count this in
> perf_stat_process_counter and only for interval mode
>
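
For the perf_stat_process_counter idea, I guess it could look roughly like
the following (hypothetical, untested sketch only; it keeps the current
'summary_counts' name and reuses the existing perf_counts() accessor),
called somewhere in the perf_stat_process_counter() path and only when
interval mode is active:

        /*
         * Hypothetical sketch: accumulate the per-interval deltas in the
         * stat processing path, and only for interval mode, instead of
         * touching evsel__compute_deltas().
         */
        static void sum_interval_counts(struct perf_stat_config *config,
                                        struct evsel *evsel, int cpu, int thread,
                                        struct perf_counts_values *count)
        {
                struct perf_counts_values *sum;

                if (!config->interval || !evsel->summary_counts)
                        return;

                sum = perf_counts(evsel->summary_counts, cpu, thread);
                sum->val += count->val;
                sum->ena += count->ena;
                sum->run += count->run;
        }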

Actually I have an easier way, which just resets the prev_raw_counts. For example:

@@ -724,6 +727,12 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)

         update_stats(&walltime_nsecs_stats, t1 - t0);

+       if (interval) {
+               stat_config.interval = 0;
+               stat_config.summary = true;
+               perf_evlist__reset_prev_raw_counts(evsel_list);
+       }
+
         /*
          * Closing a group leader splits the group, and as we only disable
          * group leaders, results in remaining events becoming enabled. To

But if we just copy directly from the current counts, the summary result looks
a bit confusing.

For example,

root@...-ppc:/# perf stat -e cycles -I1000 --interval-count 2
#           time             counts unit events
      1.000402302          2,943,521      cycles
      2.001333982          2,146,165      cycles

  Performance counter stats for 'system wide':

          5,880,031      cycles

        2.002805025 seconds time elapsed

2,943,521 + 2,146,165 != 5,880,031

That's because the counter is still enabled after the interval printing, so by
the time the summary is printed the counts have increased slightly. Users may
be confused by the summary result (why isn't it equal to the sum of the
interval values?).
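
Concretely, in the run above:

        2,943,521 + 2,146,165 = 5,089,686   (sum of the two interval deltas)
        5,880,031 - 5,089,686 =   790,345   (extra cycles counted between the
                                             last interval read and the final
                                             summary read)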

>>   	}
>>   
>>   	count->val = count->val - tmp.val;
>>   	count->ena = count->ena - tmp.ena;
>>   	count->run = count->run - tmp.run;
>> +
>> +	summary->val += count->val;
>> +	summary->ena += count->ena;
>> +	summary->run += count->run;
>>   }
>>   
>>   void perf_counts_values__scale(struct perf_counts_values *count,
>> diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
>> index 783246bf8d0d..430639c99d04 100644
>> --- a/tools/perf/util/evsel.h
>> +++ b/tools/perf/util/evsel.h
>> @@ -46,6 +46,7 @@ struct evsel {
>>   	char			*filter;
>>   	struct perf_counts	*counts;
>>   	struct perf_counts	*prev_raw_counts;
>> +	struct perf_counts	*summary_counts;
> 
> 'sum_counts' might be better
> 

That's OK. :)

Thanks
Jin Yao

> jirka
> 
>>   	int			idx;
>>   	unsigned long		max_events;
>>   	unsigned long		nr_events_printed;
>> diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
>> index 774468341851..c3fd008b4e84 100644
>> --- a/tools/perf/util/stat.c
> 
> SNIP
> 
