Message-ID: <20160113095738.GA9092@gmail.com>
Date: Wed, 13 Jan 2016 10:57:38 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Stephane Eranian <eranian@...gle.com>,
Namhyung Kim <namhyung@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>
Subject: Re: [RFC] perf record: missing buildid for callstack modules
* Arnaldo Carvalho de Melo <acme@...nel.org> wrote:
> On Tue, Jan 12, 2016 at 11:39:43AM +0100, Ingo Molnar wrote:
> > But perf tooling cares very much: it can lead to subtle bugs and bad data if we
> > display a profile with the wrong DSO or binary. 'Bad' profiles resulting from
> > a binary mismatch can be very convincing and can send developers down the wrong
> > path for hours. I'd expect my tooling to not do that.
>
> > Path names alone (the thing that exec() cares about) are not unique enough to
> > identify the binary that was profiled. So we need a content hash - hence the
> > build-ID.
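
Side note, for illustration: the build-ID is just the .note.gnu.build-id ELF
note that ld --build-id emits at link time. A toy reader could look like the
sketch below (64-bit, native-endian only, minimal error handling - this is
not perf's real reader in tools/perf/util/, it just shows where the content
hash comes from):

/*
 * Toy build-id reader, illustration only: walk the PT_NOTE segments
 * of an ELF file and print the NT_GNU_BUILD_ID note.
 */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	Elf64_Ehdr eh;
	FILE *f;

	if (argc < 2 || !(f = fopen(argv[1], "rb")) ||
	    fread(&eh, sizeof(eh), 1, f) != 1)
		return 1;

	for (int i = 0; i < eh.e_phnum; i++) {
		Elf64_Phdr ph;

		fseek(f, eh.e_phoff + i * sizeof(ph), SEEK_SET);
		if (fread(&ph, sizeof(ph), 1, f) != 1 || ph.p_type != PT_NOTE)
			continue;

		long off = ph.p_offset, end = off + ph.p_filesz;

		while (off < end) {
			Elf64_Nhdr nh;
			char name[16];
			unsigned char id[64];
			unsigned nsz, dsz;

			fseek(f, off, SEEK_SET);
			if (fread(&nh, sizeof(nh), 1, f) != 1)
				break;
			/* note name and desc are both 4-byte padded */
			nsz = (nh.n_namesz + 3) & ~3u;
			dsz = (nh.n_descsz + 3) & ~3u;
			if (nh.n_type == NT_GNU_BUILD_ID &&
			    nsz <= sizeof(name) && nh.n_descsz <= sizeof(id) &&
			    fread(name, 1, nsz, f) == nsz &&
			    !strcmp(name, "GNU")) {
				fread(id, 1, nh.n_descsz, f);
				for (unsigned j = 0; j < nh.n_descsz; j++)
					printf("%02x", id[j]);
				putchar('\n');
				return 0;
			}
			off += sizeof(nh) + nsz + dsz;
		}
	}
	return 1;
}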
>
> > Can you suggest a better solution than a build-time calculated content hash?
>
> > As for binary formats that suck and don't allow for a content hash: we do our
> > best, but of course the risk of data mismatch is there. We could perhaps cache
> > the binary inode's mtime field to at least produce a 'profile data is older
> > than binary/DSO modification date!' warning. (That check won't catch all
> > cases, such as profiling data matched across systems.)
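
The mtime check really is just a stat() and a compare - a minimal sketch,
assuming a hypothetical recorded_mtime field cached in perf.data (which does
not exist today); not actual perf code:

/*
 * Sketch of the mtime check: compare the DSO's current mtime with
 * an mtime cached in perf.data at record time, and warn if the
 * binary is newer than the profile.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

static int dso_is_stale(const char *path, time_t recorded_mtime)
{
	struct stat st;

	if (stat(path, &st) < 0)
		return -1;	/* can't tell - treat as unknown */

	return st.st_mtime > recorded_mtime;
}

int main(int argc, char **argv)
{
	/* would come out of the perf.data header in reality */
	time_t recorded_mtime = 0;

	if (argc > 1 && dso_is_stale(argv[1], recorded_mtime) > 0)
		fprintf(stderr,
			"warning: profile data is older than %s!\n", argv[1]);
	return 0;
}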
>
> So, we could think of this as: binary formats that want to aid
> observability tools need to let them:
>
> 1) Detect mismatches between the contents of DSOs present at recording
> time and those to be used at analysis time.
>
> 2) Find the symtabs, DSO binary contents and CFI tables for the DSOs
> where samples were taken.
>
> Using mtime, as suggested in other messages, will help with #1, but
> not with #2.
But but ... why is #2 a problem with mtime? If we have an out-of-date record in
the perf.data, then the perf.data is uninteresting in 99% of the use cases! It's
out of date, most likely because the binary the developer is working on got
rebuilt, or because the system got upgraded - in both cases the developer does
not care about the old records anymore...
What matters is #1, detecting mismatches, so that we are a reliable tool. Once
we've detected a mismatch, we can inform the user and our job is mostly done.
But reliable != perfect time machine. Really, #2 is a second- or third-order
concern that should never cause slowdowns of the magnitude that Peter is
complaining about!
I realize that there might be special workflows (such as system-wide monitoring)
where collecting build-ids at recording time might be useful, but those are not
the common case - and they should not slow down the common case.
> As for inefficiencies in the current approach of post-processing right
> after recording, looking for PERF_RECORD_MMAPs: Adrian suggested
> something here, and disabling the saving into ~/.debug/ will also
> help. Collecting numbers would be great.
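
For reference, that pass boils down to something like the toy model below -
the types and the in-memory event 'stream' are made-up stand-ins for
PERF_RECORD_MMAP and perf.data, just to show the shape of the work; the real
record layouts are in include/uapi/linux/perf_event.h:

/*
 * Toy model of the post-recording pass: scan an event stream and
 * pick out the mmap records, which name the DSOs whose build-ids
 * would then be read (and optionally copied into ~/.debug/).
 */
#include <stdio.h>

enum rec_type { REC_SAMPLE, REC_MMAP };	/* stand-ins for PERF_RECORD_* */

struct rec {
	enum rec_type type;
	const char *filename;		/* only set for REC_MMAP */
};

int main(void)
{
	struct rec stream[] = {		/* a fake perf.data */
		{ REC_MMAP,   "/usr/lib/libc.so.6" },
		{ REC_SAMPLE, NULL },
		{ REC_MMAP,   "/home/user/a.out" },
		{ REC_SAMPLE, NULL },
	};

	for (unsigned i = 0; i < sizeof(stream) / sizeof(stream[0]); i++) {
		if (stream[i].type != REC_MMAP)
			continue;
		printf("would collect build-id for %s\n", stream[i].filename);
	}
	return 0;
}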
I think Peter mentioned a number: the kernel build time almost _doubles_ with
this. That's clearly unacceptable.
> But the mtime thing also requires traversing the whole perf.data
> contents looking for those paths in PERF_RECORD_MMAP records.
But why? Why can't we do it at perf report time, when we will parse them anyway?
Thanks,
Ingo