Message-Id: <1412556363-26229-1-git-send-email-namhyung@kernel.org>
Date: Mon, 6 Oct 2014 09:45:58 +0900
From: Namhyung Kim <namhyung@...nel.org>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...nel.org>,
Paul Mackerras <paulus@...ba.org>,
Namhyung Kim <namhyung.kim@....com>,
Namhyung Kim <namhyung@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Jiri Olsa <jolsa@...hat.com>, David Ahern <dsahern@...il.com>,
Frederic Weisbecker <fweisbec@...il.com>
Subject: [PATCHSET 0/5] perf tools: Speed up dwarf callchain post-unwinding for libunwind (v4)
Hello,
This is v4 of the libunwind callchain post-processing speedup. It
cuts processing time by about 50% by using the global cache provided
by libunwind. In this version, I decided to use the existing
callchain_param.record_mode instead of adding a new field to
symbol_conf.
Patches 4 and 5 are just cleanups so that we can easily find out
which parts of the code use thread->priv.
You can also get it from 'perf/callchain-unwind-v4' branch on my tree:
git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git
Thanks,
Namhyung
Namhyung Kim (5):
perf report: Set callchain_param.record_mode for future use
perf callchain: Create an address space per thread
perf callchain: Use global caching provided by libunwind
perf kvm: Use thread_{,_set}_priv helpers
perf trace: Use thread_{,_set}_priv helpers
tools/perf/builtin-kvm.c | 6 ++---
tools/perf/builtin-report.c | 7 ++++++
tools/perf/builtin-trace.c | 16 ++++++-------
tools/perf/tests/dwarf-unwind.c | 3 +++
tools/perf/util/callchain.h | 2 ++
tools/perf/util/hist.h | 2 --
tools/perf/util/thread.c | 9 +++++++
tools/perf/util/unwind-libunwind.c | 48 ++++++++++++++++++++++++++++++++++----
tools/perf/util/unwind.h | 20 ++++++++++++++++
9 files changed, 95 insertions(+), 18 deletions(-)
--
2.1.0