Message-ID: <20170602194910.GB31764@kernel.org>
Date: Fri, 2 Jun 2017 16:49:10 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Milian Wolff <milian.wolff@...b.com>
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>,
Linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
Namhyung Kim <namhyung@...nel.org>,
Jiri Olsa <jolsa@...hat.com>
Subject: Re: [PATCH 1/2] perf report: ensure the perf DSO mapping matches
what libdw sees

On Fri, Jun 02, 2017 at 06:21:44PM +0200, Milian Wolff wrote:
> On Friday, 2 June 2017 17:23:41 CEST Arnaldo Carvalho de Melo wrote:
> > Looks ok, having both implementations matching and the callchains making
> > sense for your workloads is a good way to verify the sanity, thanks.
> > I wonder if we shouldn't somehow script this, i.e. build it with one
> > implementation, generate output from some test workload, build it with
> > the other to get a second output, diff the two, and report when they differ.
> That does sound like a good idea, but I'm unsure how to do it. Note that many
> "simple" tests work just fine. Only larger complicated workloads trigger this
> issue for me.
> One potential way to test it would be `perf archive` - i.e. I send you the
> binaries involved and then we can use perf script diffing to ensure it all
> works...
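To make the scripting idea above a bit more concrete, something along these
lines is what I had in mind, a rough, untested sketch (the NO_LIBUNWIND /
NO_LIBDW_DWARF_UNWIND build knobs are from memory, double check them):

  # record one workload with DWARF callchains, then unwind it with both
  # implementations and diff the resulting perf script output
  perf record --call-graph dwarf -o perf.data -- ./some-test-workload

  # build perf forcing the libdw DWARF unwinder (flag name from memory)
  make -C tools/perf clean
  make -C tools/perf NO_LIBUNWIND=1
  tools/perf/perf script -i perf.data > out.libdw

  # now the other way around, forcing libunwind
  make -C tools/perf clean
  make -C tools/perf NO_LIBDW_DWARF_UNWIND=1
  tools/perf/perf script -i perf.data > out.libunwind

  diff -u out.libdw out.libunwind || echo "callchains differ"
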
Humm, I'm trying to cook up a:

  perf data filter --pid 12345 --perf-data-offset 1234567 --output perf.data.subset

so that when we find a case like that we can extract a small subset of the
perf.data file with just the sample we want the backtrace from, plus the
mmaps, etc. up to that point.
With that I could keep a repo of interesting perf.data files to have in
my regression tests.
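
The regression run itself could then be as simple as diffing perf script
output for each stored subset against a blessed copy, something like this
sketch (the 'perf data filter' subcommand is the one I'm still cooking up,
so that first step is hypothetical):

  # hypothetical, the subcommand doesn't exist yet: carve out just the
  # interesting sample plus the mmaps leading up to it
  perf data filter --pid 12345 --perf-data-offset 1234567 --output perf.data.subset

  # then, for every stored subset, compare against the expected output
  for f in regression-repo/*.subset; do
          perf script -i "$f" > current.txt
          diff -u "${f%.subset}.expected" current.txt || echo "FAIL: $f"
  done
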
- Arnaldo