Message-Id: <1343120493-23059-1-git-send-email-namhyung@kernel.org>
Date:	Tue, 24 Jul 2012 18:01:21 +0900
From:	Namhyung Kim <namhyung@...nel.org>
To:	Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Paul Mackerras <paulus@...ba.org>,
	Ingo Molnar <mingo@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Stephane Eranian <eranian@...gle.com>,
	Jiri Olsa <jolsa@...hat.com>,
	Ulrich Drepper <drepper@...il.com>,
	Andi Kleen <andi@...stfloor.org>
Subject: [RFC/PATCHSET 00/12] perf report: Add support to event group viewing (v1)

Hi all,

This is a patchset to support event grouping on perf report.

It depends on other patches: the hist print refactoring [1], the file
header processing feature [2] and (obviously) Jiri's event group
management [3]. All of these still need to be reviewed though. ;)

The basic idea is to move the group members' hist entries to the leader
and sort/collapse them on the leader's tree. The leader then carries all
of the group members' stats. The output is sorted by the leader's period,
then by the first member's, and so on.
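
To make that a bit more concrete, here is a minimal, self-contained
sketch of the collapse step. It is illustration only: the struct and
function names are simplified stand-ins (not the actual hist_entry /
he_stat code in the patches) and a flat array replaces perf's rbtrees:

  #include <stdio.h>
  #include <string.h>

  #define MAX_GROUP  8      /* assumed group size limit, for the example */
  #define MAX_SYMS  64

  struct he_stat {
          unsigned long long period;
          unsigned int nr_samples;
  };

  struct hist_entry {
          const char *sym;                  /* collapse key: symbol name */
          struct he_stat stat[MAX_GROUP];   /* [0] = leader, [1..] = members */
  };

  static struct hist_entry leader_tree[MAX_SYMS];
  static int nr_entries;

  /* Account one sample of group member 'idx' into the leader's tree. */
  static void leader_add_sample(const char *sym, int idx,
                                unsigned long long period)
  {
          int i;

          for (i = 0; i < nr_entries; i++)
                  if (!strcmp(leader_tree[i].sym, sym))
                          break;
          if (i == nr_entries) {
                  if (nr_entries == MAX_SYMS)
                          return;           /* sketch: drop on overflow */
                  leader_tree[nr_entries++].sym = sym;
          }

          leader_tree[i].stat[idx].period += period;
          leader_tree[i].stat[idx].nr_samples++;
  }

  int main(void)
  {
          /* e.g. samples from '{cache-references,cache-misses}': idx 0 and 1 */
          leader_add_sample("do_lookup_x", 0, 2362);
          leader_add_sample("do_lookup_x", 1, 2024);
          leader_add_sample("strlen",      0,  811);

          for (int i = 0; i < nr_entries; i++)
                  printf("%-12s %8llu %8llu\n", leader_tree[i].sym,
                         leader_tree[i].stat[0].period,
                         leader_tree[i].stat[1].period);
          return 0;
  }

With the per-member stats kept on the leader's entries like this, the
sort/collapse pass only has to walk the leader's tree, and the output
stage can print one overhead column per member (as in the --group
output further below).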

To use it, 'perf record' should group the events at record time.
perf report then parses the saved command line and reconstructs the
group relation. To keep things simple, currently only the
'-e { event1,event2 }' syntax is supported (i.e. the --group option is
*NOT* supported), but it shouldn't be that hard to support --group as
well.
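
For reference, the reconstruction boils down to scanning the saved
command line for brace-delimited event lists. The fragment below is a
rough, self-contained sketch of that scan (the function name is made up
for illustration; it is not the header parsing code from this series):

  #include <stdio.h>
  #include <string.h>

  static void find_groups(const char *cmdline)
  {
          const char *p = cmdline;

          while ((p = strchr(p, '{')) != NULL) {
                  const char *end = strchr(p, '}');
                  int nr = 1;

                  if (!end)
                          break;                  /* unbalanced braces: stop */
                  for (const char *c = p; c < end; c++)
                          if (*c == ',')
                                  nr++;           /* members are comma-separated */
                  printf("group of %d events: %.*s\n",
                         nr, (int)(end - p + 1), p);
                  p = end + 1;
          }
  }

  int main(void)
  {
          find_groups("perf record -e cycles:u "
                      "-e { cache-references,cache-misses }:u noploop 1");
          return 0;
  }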

However, I think re-using the event parsing routine (at least in its
current form) has some problems, especially if perf report is not run
on the same machine that ran perf record. I cannot find a better way
than extending/changing the perf file format so that the group
relationship set up at record time is carried through to perf report.
Any thoughts?

Here is an example:

  $ ./perf record -e cycles:u -e '{cache-references,cache-misses}:u' noploop 1

Without --group:

  $ ./perf report --stdio
  ...
  # Samples: 4K of event 'cycles:u'
  # Event count (approx.): 3692183905
  #
  # Overhead  Command       Shared Object                   Symbol
  # ........  .......  ..................  .......................
  #
      99.98%  noploop  noploop             [.] main               
       0.01%  noploop  ld-2.15.so          [.] _dl_relocate_object
       0.00%  noploop  [kernel.kallsyms]   [k] page_fault         
       0.00%  noploop  libc-2.15.so        [.] __execvpe          
       0.00%  noploop  libpthread-2.15.so  [.] __read_nocancel    
       0.00%  noploop  ld-2.15.so          [.] _start             
  
  
  # Samples: 26  of event 'cache-references:u'
  # Event count (approx.): 4229
  #
  # Overhead  Command       Shared Object                              Symbol
  # ........  .......  ..................  ..................................
  #
      55.85%  noploop  ld-2.15.so          [.] do_lookup_x                   
      19.18%  noploop  ld-2.15.so          [.] strlen                        
      14.09%  noploop  libc-2.15.so        [.] getenv                        
       3.17%  noploop  ld-2.15.so          [.] _dl_fini                      
       2.65%  noploop  noploop             [.] main                          
       1.44%  noploop  perf                [.] perf_evlist__prepare_workload 
       1.44%  noploop  [kernel.kallsyms]   [k] page_fault                    
       0.99%  noploop  [kernel.kallsyms]   [k] apic_timer_interrupt          
       0.33%  noploop  noploop             [.] handler                       
       0.33%  noploop  libc-2.15.so        [.] __run_exit_handlers           
       0.33%  noploop  [kernel.kallsyms]   [k] call_function_single_interrupt
       0.12%  noploop  ld-2.15.so          [.] _dl_start                     
       0.05%  noploop  libpthread-2.15.so  [.] __read_nocancel               
       0.02%  noploop  ld-2.15.so          [.] _start                        
  
  
  # Samples: 23  of event 'cache-misses:u'
  # Event count (approx.): 4312
  #
  # Overhead  Command       Shared Object                                Symbol
  # ........  .......  ..................  ....................................
  #
      46.94%  noploop  ld-2.15.so          [.] do_lookup_x                     
      21.78%  noploop  libc-2.15.so        [.] getenv                          
      18.65%  noploop  ld-2.15.so          [.] calloc                          
       6.05%  noploop  ld-2.15.so          [.] rtld_lock_default_lock_recursive
       2.92%  noploop  noploop             [.] main                            
       1.41%  noploop  [kernel.kallsyms]   [k] page_fault                      
       1.32%  noploop  libc-2.15.so        [.] execvp                          
       0.32%  noploop  libc-2.15.so        [.] __run_exit_handlers             
       0.32%  noploop  ld-2.15.so          [.] _dl_fini                        
       0.12%  noploop  perf                [.] perf_evlist__prepare_workload   
       0.12%  noploop  ld-2.15.so          [.] _dl_start                       
       0.02%  noploop  libpthread-2.15.so  [.] __read_nocancel                 
       0.02%  noploop  ld-2.15.so          [.] _start                          
  

With --group:

  $ ./perf report --group --stdio
  ...
  # Samples: 4K of event 'cycles:u'
  # Event count (approx.): 7384367810
  #
  # Overhead  Command       Shared Object                   Symbol
  # ........  .......  ..................  .......................
  #
      99.98%  noploop  noploop             [.] main               
       0.01%  noploop  ld-2.15.so          [.] _dl_relocate_object
       0.00%  noploop  [kernel.kallsyms]   [k] page_fault         
       0.00%  noploop  libc-2.15.so        [.] __execvpe          
       0.00%  noploop  libpthread-2.15.so  [.] __read_nocancel    
       0.00%  noploop  ld-2.15.so          [.] _start             
  
  
  # Samples: 49  of event 'anon_group { cache-references:u, cache-misses:u }'
  # Event count (approx.): 12770
  #
  # Overhead          Command       Shared Object                                Symbol
  # ................  .......  ..................  ....................................
  #
      55.85%  46.94%  noploop  ld-2.15.so          [.] do_lookup_x                     
      19.18%   0.00%  noploop  ld-2.15.so          [.] strlen                          
      14.09%  21.78%  noploop  libc-2.15.so        [.] getenv                          
       3.17%   0.32%  noploop  ld-2.15.so          [.] _dl_fini                        
       2.65%   2.92%  noploop  noploop             [.] main                            
       1.44%   1.41%  noploop  [kernel.kallsyms]   [k] page_fault                      
       1.44%   0.12%  noploop  perf                [.] perf_evlist__prepare_workload   
       0.99%   0.00%  noploop  [kernel.kallsyms]   [k] apic_timer_interrupt            
       0.33%   0.32%  noploop  libc-2.15.so        [.] __run_exit_handlers             
       0.33%   0.00%  noploop  noploop             [.] handler                         
       0.33%   0.00%  noploop  [kernel.kallsyms]   [k] call_function_single_interrupt  
       0.12%   0.12%  noploop  ld-2.15.so          [.] _dl_start                       
       0.05%   0.02%  noploop  libpthread-2.15.so  [.] __read_nocancel                 
       0.02%   0.02%  noploop  ld-2.15.so          [.] _start                          
       0.00%  18.65%  noploop  ld-2.15.so          [.] calloc                          
       0.00%   6.05%  noploop  ld-2.15.so          [.] rtld_lock_default_lock_recursive
       0.00%   1.32%  noploop  libc-2.15.so        [.] execvp                          
  
(Hmm.. it looks like there's a bug in the group event counting shown in
the header area. But I believe the period values themselves are intact.)

It is also available via my tree (if you want to test it):

  git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git perf/report-group-v1

Any comments are welcome, thanks.
Namhyung

[1] https://lkml.org/lkml/2012/7/19/263
[2] https://lkml.org/lkml/2012/7/21/75
[3] https://lkml.org/lkml/2012/7/19/419


Namhyung Kim (12):
  perf tools: Add a couple of helper routines to handle groups
  perf hist: Convert to struct he_stat
  perf hist: Collapse group hist_entries to a leader
  perf hist: Maintain total periods of group members in the leader
  perf report: Make another loop for output resorting
  perf header: Reconstruct group relationship by parsing cmdline
  perf ui/hist: Add support to group viewing
  perf ui/browser: Add support to group viewing
  perf ui/gtk: Add support to group viewing
  perf report: Show leader events only when event group is enabled
  perf report: Show group description when event group is enabled
  perf report: Add --group option

 tools/perf/builtin-report.c    |   29 +++++++
 tools/perf/ui/browsers/hists.c |  105 +++++++++++++++++++++---
 tools/perf/ui/gtk/browser.c    |   69 +++++++++++++---
 tools/perf/ui/hist.c           |  158 +++++++++++++++++++++++++++++------
 tools/perf/util/evsel.c        |   25 ++++++
 tools/perf/util/evsel.h        |   21 +++++
 tools/perf/util/header.c       |  110 +++++++++++++++++++++++++
 tools/perf/util/hist.c         |  177 ++++++++++++++++++++++++++++++++++------
 tools/perf/util/hist.h         |    1 +
 tools/perf/util/parse-events.c |    6 +-
 tools/perf/util/sort.h         |   17 ++--
 tools/perf/util/symbol.c       |    4 +
 tools/perf/util/symbol.h       |    3 +-
 13 files changed, 641 insertions(+), 84 deletions(-)

-- 
1.7.10.4
