Message-ID: <526F862E.9060203@linux.vnet.ibm.com>
Date: Tue, 29 Oct 2013 15:25:58 +0530
From: Hemant Kumar <hkshaw@...ux.vnet.ibm.com>
To: Pekka Enberg <penberg@....fi>
CC: David Ahern <dsahern@...il.com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Oleg Nesterov <oleg@...hat.com>,
"hegdevasant@...ux.vnet.ibm.com" <hegdevasant@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...hat.com>,
"anton@...hat.com" <anton@...hat.com>,
"systemtap@...rceware.org" <systemtap@...rceware.org>,
Namhyung Kim <namhyung@...nel.org>,
Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
"aravinda@...ux.vnet.ibm.com" <aravinda@...ux.vnet.ibm.com>
Subject: Re: [PATCH v4 2/3] Support for perf to probe into SDT markers:
On 10/29/2013 12:15 AM, Pekka Enberg wrote:
[...]
> If you build a cache of DSOs and executables that have SDT markers (with a
> SHA1 hash), the cache size is bounded by the SDT-marker-annotated files. You
> probably can then unconditionally scan the cached filenames for SDT
> markers for 'perf list'. And once you see a SHA1 mismatch, you either
> rescan automatically or explain to the user that:
>
> SDT marker cache needs to be updated. Please run 'perf list --scan'.
>
> Transparently supporting SDT markers as events for 'perf trace -e' and
> others is slightly more tricky because you probably don't want to scan
> the files for every 'perf trace' invocation. However, you can probably
> get really far with a 1024-entry SDT marker cache that's separate from
> the 'executables and DSOs with SDT markers' cache. So whenever the user
> does something like
>
> perf trace -e libc:setjmp sleep 1
>
> The 'libc:setjmp' ends up in the 1024-entry cache (or whatever makes
> most sense) that points directly to the SDT marker so we can hook into it
> quickly. Using a simple LRU eviction policy, you end up pushing out the
> uninteresting SDT markers and keeping the ones that are used all the
> time.
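For the 1024-entry event cache with LRU eviction you describe, I imagine
something roughly like the sketch below (just to check that I follow; none
of these names exists in perf, sdt_lru_entry, sdt_lru_lookup and
sdt_lru_insert are made up for illustration):

/* Rough sketch (not actual perf code): a tiny fixed-size cache mapping
 * "provider:marker" event names to resolved marker locations, with LRU
 * eviction.  All names here are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SDT_LRU_SIZE 1024

struct sdt_lru_entry {
        char event[128];        /* "provider:marker" as typed by the user */
        char path[256];         /* binary the marker was resolved in      */
        uint64_t addr;          /* marker location inside that binary     */
        uint64_t last_used;     /* logical clock for LRU eviction         */
        int used;
};

static struct sdt_lru_entry sdt_lru[SDT_LRU_SIZE];
static uint64_t sdt_lru_clock;

static struct sdt_lru_entry *sdt_lru_lookup(const char *event)
{
        int i;

        for (i = 0; i < SDT_LRU_SIZE; i++) {
                if (sdt_lru[i].used && !strcmp(sdt_lru[i].event, event)) {
                        sdt_lru[i].last_used = ++sdt_lru_clock;
                        return &sdt_lru[i];
                }
        }
        return NULL;
}

static void sdt_lru_insert(const char *event, const char *path, uint64_t addr)
{
        int i, victim = 0;

        /* Pick a free slot, or evict the least recently used entry. */
        for (i = 0; i < SDT_LRU_SIZE; i++) {
                if (!sdt_lru[i].used) {
                        victim = i;
                        break;
                }
                if (sdt_lru[i].last_used < sdt_lru[victim].last_used)
                        victim = i;
        }

        snprintf(sdt_lru[victim].event, sizeof(sdt_lru[victim].event), "%s", event);
        snprintf(sdt_lru[victim].path, sizeof(sdt_lru[victim].path), "%s", path);
        sdt_lru[victim].addr = addr;
        sdt_lru[victim].last_used = ++sdt_lru_clock;
        sdt_lru[victim].used = 1;
}

A linear scan like this should be cheap enough for 1024 entries; the lookup
could always be switched to a hash later if it ever shows up in profiles.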
So, what I understand is that we need to implement it this way (please do
correct me if I am wrong!):
When the user invokes "perf list" / "perf list --sdt" for the first time,
the executables and DSOs (in PATH and /usr/lib*) should be searched for SDT
markers. All these markers, along with their one-to-one mapping with the
files, can be stored in a "cache" where each entry looks like:
[ sdt_marker : provider : FQN : buildid : location ]
where "sdt_marker" and "provider" are the marker and provider names present
in the SDT notes' description, "FQN" is the absolute path of the binary,
"buildid" is the build-id of that binary, and "location" is the location of
the SDT marker inside the binary.
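In code, each cache entry could look something like this (only a sketch to
illustrate the fields; the struct name and field sizes are made up):

#include <limits.h>

/* Illustrative layout of one cache entry -- not an existing perf
 * structure, just the fields described above. */
struct sdt_cache_entry {
        char          marker[64];      /* SDT note (marker) name            */
        char          provider[64];    /* provider name from the SDT note   */
        char          fqn[PATH_MAX];   /* absolute path of the binary       */
        char          buildid[41];     /* hex build-id, NUL terminated      */
        unsigned long location;        /* marker location inside the binary */
};

On disk this could simply be one colon-separated line per entry, for
example something like setjmp:libc:/lib64/libc.so.6:<buildid>:<location>
for Pekka's libc:setjmp example.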
Subsequent invocations of "perf list" / "perf list --sdt" shall read this
cache and display the info. If we need to update the list, we can use
"perf list --scan".
Now, if we use "perf record -e prov:mark -aR sleep 10", it should go
through the cache and find the matching markers; if there are multiple
matches, we probe the markers in all of the matched entries. Whenever a
match is found, the FQN can be used to find the binary and match the
buildid (we need to confirm that the binary didn't change since the last
"perf list" / "perf list --scan"), and then confirm the presence of the
marker and its location. Then we go on with the probing and recording.
There shouldn't be any explicit "perf probe" step in between.
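In pseudo-C, that record-time path could look roughly like this
(resolve_sdt_event() and the helpers it calls, read_buildid(), note_is_at()
and add_uprobe_event(), are all hypothetical and only show the intended
ordering):

#include <stdio.h>
#include <string.h>
#include <limits.h>

struct sdt_cache_entry {            /* same illustrative layout as above */
        char          marker[64];
        char          provider[64];
        char          fqn[PATH_MAX];
        char          buildid[41];
        unsigned long location;
};

/* Hypothetical helpers, assumed to exist elsewhere:
 *  - read_buildid():     recompute the build-id of a binary on disk
 *                        (0 on success, negative on error)
 *  - note_is_at():       1 if the SDT note is still at the cached location
 *  - add_uprobe_event(): plant the uprobe and register the perf event
 *                        (0 on success)
 */
int read_buildid(const char *path, char *buf, size_t len);
int note_is_at(const char *path, const char *marker, unsigned long loc);
int add_uprobe_event(const char *path, unsigned long loc,
                     const char *provider, const char *marker);

int resolve_sdt_event(struct sdt_cache_entry *cache, int nr,
                      const char *provider, const char *marker)
{
        int i, nr_probed = 0;

        for (i = 0; i < nr; i++) {
                struct sdt_cache_entry *ent = &cache[i];
                char buildid[41];

                /* Probe every cached entry whose provider:marker matches;
                 * the same marker may live in several binaries. */
                if (strcmp(ent->provider, provider) ||
                    strcmp(ent->marker, marker))
                        continue;

                /* Make sure the binary has not changed since the last
                 * "perf list" / "perf list --scan". */
                if (read_buildid(ent->fqn, buildid, sizeof(buildid)) < 0 ||
                    strcmp(buildid, ent->buildid)) {
                        fprintf(stderr, "%s changed, please rerun 'perf list --scan'\n",
                                ent->fqn);
                        continue;
                }

                /* Re-confirm the marker and its location, then probe and
                 * record, with no explicit "perf probe" step for the user. */
                if (!note_is_at(ent->fqn, ent->marker, ent->location))
                        continue;
                if (!add_uprobe_event(ent->fqn, ent->location,
                                      provider, marker))
                        nr_probed++;
        }

        return nr_probed ? 0 : -1;
}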
That surely makes the task of a user a lot easier!
However, there are some issues which are likely to come up while
implementing it this way:
1. Where should this cache live? Keeping it in the tracing directory
inside debugfs seems the most feasible option. And should this cache be
shareable?
2. "perf record" is a performance-intensive process; can we afford the
delay added by this search here?
etc.
--
Thanks
Hemant Kumar
--