Message-ID: <20121106155028.GB13629@infradead.org>
Date:	Tue, 6 Nov 2012 12:50:28 -0300
From:	Arnaldo Carvalho de Melo <acme@...stprotocols.net>
To:	Stephane Eranian <eranian@...gle.com>
Cc:	linux-kernel@...r.kernel.org, peterz@...radead.org, mingo@...e.hu,
	ak@...ux.intel.com, jolsa@...hat.com, namhyung.kim@....com
Subject: Re: [PATCH v2 14/16] perf tools: add new mem command for memory
 access profiling

Em Tue, Nov 06, 2012 at 12:44:46PM -0300, Arnaldo Carvalho de Melo escreveu:
> [root@...dy ~]# perf record -g -a -e cpu/mem-stores/
> ^C[ perf record: Woken up 25 times to write data ]
> [ perf record: Captured and wrote 7.419 MB perf.data (~324160 samples) ]
> 
> Yay, got some numbers.

But then the results of:

$ perf mem -t load rep --stdio

are bogus, at least in the callchains:

# ========
# captured on: Tue Nov  6 12:46:21 2012
# hostname : sandy.ghostprotocols.net
# os release : 3.7.0-rc2+
# perf version : 3.7.rc4.gfaa41f
# arch : x86_64
# nrcpus online : 8
# nrcpus avail : 8
# cpudesc : Intel(R) Core(TM) i7-2920XM CPU @ 2.50GHz
# cpuid : GenuineIntel,6,42,7
# total memory : 16220228 kB
# cmdline : /home/acme/bin/perf record -g -a -e cpu/mem-stores/ 
# event : name = cpu/mem-stores/, type = 4, config = 0x2cd, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0, excl_ho
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, software = 1, tracepoint = 2, uncore_cbox_0 = 6, uncore_cbox_1 = 7, uncore_cbox_2 = 8, uncore_cbox_3 
# ========
#
# Samples: 98  of event 'cpu/mem-stores/'
# Total cost : 98
# Sort order : cost,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked
#
# Overhead      Samples     Cost             Memory access                                          Symbol               Share
# ........  ...........  .......  ........................  ..............................................  ..................
#
    19.39%           19      N/A                            [k] csd_unlock                                  [kernel.kallsyms] 
            |
            --- csd_unlock
               |          
               |--6242.11%-- generic_smp_call_function_single_interrupt
               |          smp_call_function_single_interrupt
               |          call_function_single_interrupt
               |          cpuidle_enter
               |          cpuidle_enter_state
               |          cpuidle_idle_call
               |          cpu_idle
               |          |          
               |          |--85.08%-- start_secondary
               |          |          
               |           --14.92%-- rest_init
               |                     start_kernel
               |                     x86_64_start_reservations
               |                     x86_64_start_kernel
               |          
               |--100.00%-- smp_call_function_single_interrupt
               |          call_function_single_interrupt
               |          cpuidle_enter
               |          cpuidle_enter_state
               |          cpuidle_idle_call
               |          cpu_idle
               |          start_secondary
                --97088126703734472704.00%-- [...]

     5.10%            5      N/A                            [k] _raw_spin_lock_irqsave                      [kernel.kallsyms] 
            |



Ideas?

- Arnaldo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
