Message-ID: <20101208164015.GA5444@amd>
Date: Thu, 9 Dec 2010 03:40:15 +1100
From: Nick Piggin <npiggin@...nel.dk>
To: Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: linux-kernel@...r.kernel.org
Subject: buggy perf callgraph output

Hi,

I get strange-looking perf output: in graph and fractal mode the
callchain hits are not sorted properly (note, for example, the 0.05%
branch printed above the 0.16% one in the first entry below). For
context, the output came from roughly the following; "./open" stands in
for the test program visible in the comm column, so the exact command
lines are approximate:
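
  perf record -g ./open
  perf report -g graph        # likewise with the default, -g fractal

The graph output: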

     7.34%  open  [kernel.kallsyms]  [k] kmem_cache_alloc
            |
            --- kmem_cache_alloc
               |
               |--0.05%-- get_empty_filp
               |          do_filp_open
               |          do_sys_open
               |          sys_open
               |          system_call
               |          __open_nocancel
               |
                --0.16%-- getname
                          do_sys_open
                          sys_open
                          system_call
                          __open_nocancel

     6.17%  open  [kernel.kallsyms]  [k] _raw_spin_lock
            |
            --- _raw_spin_lock
               |
               |--0.01%-- add_partial
               |          __slab_free
               |          kmem_cache_free
               |          file_free_rcu
               |          __rcu_process_callbacks
               |          rcu_process_callbacks
               |          __do_softirq
               |          call_softirq
               |          do_softirq
               |          irq_exit
               |          smp_apic_timer_interrupt
               |          apic_timer_interrupt
               |
               |--0.16%-- dput
               |          |
               |           --0.57%-- fput
               |                     filp_close
               |                     sys_close
               |                     system_call
               |                     __libc_close
               |
               |--0.60%-- link_path_walk
               |          do_path_lookup
               |          do_filp_open
               |          do_sys_open
               |          sys_open
               |          system_call
               |          __open_nocancel
               |
               |--0.57%-- __slab_free
               |          kmem_cache_free
               |          file_free_rcu
               |          __rcu_process_callbacks
               |          rcu_process_callbacks
               |          __do_softirq

With -g flat it looks a bit better:

     7.34%  open  [kernel.kallsyms]  [k] kmem_cache_alloc
            6.09%
               kmem_cache_alloc
               get_empty_filp
               do_filp_open
               do_sys_open
               sys_open
               system_call
               __open_nocancel

            1.07%
               kmem_cache_alloc
               getname
               do_sys_open
               sys_open
               system_call
               __open_nocancel

     6.17%  open  [kernel.kallsyms]  [k] _raw_spin_lock
            0.60%
               _raw_spin_lock
               link_path_walk
               do_path_lookup
               do_filp_open
               do_sys_open
               sys_open
               system_call
               __open_nocancel

            0.57%
               _raw_spin_lock
               fd_install
               do_sys_open
               sys_open
               system_call
               __open_nocancel

            0.57%
               _raw_spin_lock
               dput
               fput
               filp_close
               sys_close
               system_call
               __libc_close
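
Incidentally, what I'd expect in graph/fractal mode is siblings ordered
by cumulative hit percentage, highest first. Below is a standalone toy
sketch of that ordering in C. It is not perf's actual callchain code:
struct chain_node and main() are made up here, and the sample data is
just the four branch percentages from the _raw_spin_lock entry above.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a callchain branch: a symbol plus the cumulative
 * hit percentage attributed to it. */
struct chain_node {
	const char *sym;
	double percent;
};

/* Order siblings by percentage, highest first. */
static int cmp_percent_desc(const void *a, const void *b)
{
	const struct chain_node *x = a, *y = b;

	if (x->percent < y->percent)
		return 1;
	if (x->percent > y->percent)
		return -1;
	return 0;
}

int main(void)
{
	/* The branches as printed in the buggy output above. */
	struct chain_node kids[] = {
		{ "add_partial",    0.01 },
		{ "dput",           0.16 },
		{ "link_path_walk", 0.60 },
		{ "__slab_free",    0.57 },
	};
	size_t i, n = sizeof(kids) / sizeof(kids[0]);

	qsort(kids, n, sizeof(kids[0]), cmp_percent_desc);

	for (i = 0; i < n; i++)
		printf("|--%.2f%%-- %s\n", kids[i].percent, kids[i].sym);
	return 0;
}

That would print link_path_walk (0.60%) first, then __slab_free
(0.57%), dput (0.16%) and add_partial (0.01%), instead of the order
shown above.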