Message-ID: <20120117122133.GB4959@elte.hu>
Date: Tue, 17 Jan 2012 13:21:33 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@...hat.com>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-mm <linux-mm@...ck.org>, Andi Kleen <andi@...stfloor.org>,
Christoph Hellwig <hch@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Roland McGrath <roland@...k.frob.com>,
Thomas Gleixner <tglx@...utronix.de>,
Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Anton Arapov <anton@...hat.com>,
Ananth N Mavinakayanahalli <ananth@...ibm.com>,
Jim Keniston <jkenisto@...ux.vnet.ibm.com>,
Stephen Rothwell <sfr@...b.auug.org.au>
Subject: Re: [PATCH v9 3.2 7/9] tracing: uprobes trace_event interface

Have you tried to use 'perf probe' to achieve any useful
instrumentation on a real app?

I just tried out the 'glibc:free' usecase and it's barely
usable.
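
( For reference, the probe_libc:free event used below was presumably
  created via 'perf probe -x' on libc - the exact path is an
  assumption here, it varies by distro:

    $ perf probe -x /lib/libc.so.6 free   # path varies by distro
)
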
Firstly, the recording very frequently produces overruns:

  $ perf record -e probe_libc:free -aR sleep 1
  [ perf record: Woken up 169 times to write data ]
  [ perf record: Captured and wrote 89.674 MB perf.data (~3917919 samples) ]
  Warning: Processed 1349133 events and lost 1 chunks!

Using -m 4096 made it work better.
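
( -m sets the ring-buffer size in mmap data pages, so -m 4096 means
  a 16 MB buffer per CPU on 4K-page systems - i.e. something like:

    $ perf record -m 4096 -e probe_libc:free -aR sleep 1
)
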
Adding -g for call-graph profiling caused 'perf report' to lock
up:

  $ perf record -m 4096 -e probe_libc:free -agR sleep 1
  $ perf report

  [ loops forever ]

I've sent a testcase to Arnaldo separately. Note that 'perf
report --stdio' appears to work.

Regular '-e cycles -g' works fine, so this is a uprobes-specific
bug.
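
( The comparison run was something like:

    $ perf record -m 4096 -e cycles -ag sleep 1
    $ perf report

  which produces sane call-graph output in the TUI. )
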
Thanks,

	Ingo