Message-ID: <545867BC.9070000@hitachi.com>
Date: Tue, 04 Nov 2014 14:44:28 +0900
From: Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
srikar@...ux.vnet.ibm.com, Peter Zijlstra <peterz@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Brendan Gregg <brendan.gregg@...il.com>,
yrl.pp-manager.tt@...achi.com,
Hemant Kumar <hemant@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH perf/core 0/6] perf-probe: Bugfix and add new options
for cache

(2014/11/04 12:14), Namhyung Kim wrote:
> Hi Masami,
>
> On Fri, 31 Oct 2014 14:51:29 -0400, Masami Hiramatsu wrote:
>> Hi,
>>
>> Here is a series of patches for enabling an "event cache" feature
>> in perf probe. Brendan gave me this cool idea, thanks! :)
>>
>> In this series, I added the following options/features (a short
>> example follows the list):
>> - --output option
>> We can save the probe definition command for a given probe-event
>> instead of setting it up in the local tracing/kprobe_events.
>>
>> - --no-inlines option
>> We can avoid searching for inlined functions in debuginfo. This is
>> usually useful with wildcards, since a wildcard can hit a huge
>> number of probe-points.
>>
>> - $params special probe argument
>> $params is expanded to the function's parameters only, without any
>> locally defined variables. This is useful for function-call tracing.
>>
>> - wildcard support for function name
>> Wildcard support is the key feature of this idea. Now we can use
>> '*foo*' as the function name when defining a probe-point.
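>>
>> For example, the following saves the probe definition (with its
>> parameters) for a single function to stdout; vfs_symlink is just an
>> illustration here:
>>
>> # perf probe -o - -a 'vfs_symlink $params'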
>>
>> So, by using all of them, we can make an "event cache" file covering
>> all functions (except for inlined functions), as below:
>>
>> # perf probe --max-probes=100000 --no-inlines -a '* $params' -o event.cache
>>
>> builds "event.cache" file in which event settings for
>> all function entries, like below;
>>
>> p:probe/reset_early_page_tables _text+12980741
>> p:probe/copy_bootdata _text+12980830 real_mode_data=%di:u64
>> p:probe/exit_amd_microcode _text+14692680
>> p:probe/early_make_pgtable _text+12981274 address=%di:u64
>> p:probe/x86_64_start_reservations _text+12981700 real_mode_data=%di:u64
>> p:probe/x86_64_start_kernel _text+12981744 real_mode_data=%di:u64
>> p:probe/reserve_ebda_region _text+12982117
>
> Does this event cache support kernel modules too? AFAIK a module can
> be loaded at a different address each time, even on the same kernel.

Yes, for modules, perf probe uses the target function's symbol directly
instead of _text:
----
perf probe -m xfs -o - -q -a xfs_acl_exists
p:probe/xfs_acl_exists xfs:xfs_acl_exists+0
----
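
Since the definition is relative to the module's own symbol, it stays
valid regardless of the address at which the module is loaded. So,
assuming the xfs module is loaded on the target machine, the output can
be appended straight to kprobe_events:
----
perf probe -m xfs -o - -q -a xfs_acl_exists >> /sys/kernel/debug/tracing/kprobe_events
----
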
>> This event.cache file will be big (but much smaller than the native
>> debuginfo :) ) if your kernel has many options enabled.
>> Anyway, you can compress it, too.
>>
>> # wc -l event.cache
>> 33813 event.cache
>> # ls -sh event.cache
>> 2.3M event.cache
>> # ls -sh event.cache.gz
>> 464K event.cache.gz
>>
>> To set up a probe event, you can grep for the function name and
>> write the result to tracing/kprobe_events, as below:
>>
>> # zcat event.cache.gz | \
>> grep probe/vfs_symlink > /sys/kernel/debug/tracing/kprobe_events
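>>
>> Then the new event can be enabled through the usual events directory
>> (assuming the definition was accepted):
>>
>> # echo 1 > /sys/kernel/debug/tracing/events/probe/vfs_symlink/enable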
>>
>> This can be applied to a remote machine only if that machine runs
>> exactly the same kernel binary. Perhaps we need some helper tool
>> to check that.
>
> While it's useful for "agent-less" systems, I think we also need a
> simple way to apply it with the perf tools.

Yeah, I see. As I mentioned in my reply to Arnaldo, I'd like to add a
--query option for cached event definitions, verified by checking the
build id. In that case, you just need to run perf-archive and send the
result to the remote machine; then you can run perf probe --query-set
"event" on that machine.
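
Roughly, the workflow I have in mind looks like this (note that
--query/--query-set are not implemented yet; this is just a sketch):
----
# on the reference machine
perf archive
# send the archive (and event.cache) to the target, then on the target:
perf probe --query-set vfs_symlink
----
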
Thank you,
--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@...achi.com