Message-ID: <20181016162554.GD6631@krava>
Date: Tue, 16 Oct 2018 18:25:54 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Stephane Eranian <eranian@...gle.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Subject: Re: [BUG] perf stat: hangs with -p and process completes
On Tue, Oct 16, 2018 at 01:03:41PM +0200, Jiri Olsa wrote:
> On Fri, Oct 12, 2018 at 02:26:09PM -0700, Stephane Eranian wrote:
> > Hi,
> >
> > I am running into a perf stat issue with the -p option, which allows you
> > to attach to a running process. If that process happens to terminate
> > while being monitored, perf hangs and never terminates. The proper
> > behavior would be to stop. I can see why: the attached process is not a
> > child, so wait() would not work.
> >
> > To reproduce:
> > $ sleep 10 &
> > $ perf stat -p $!
> >
> > doing the same with perf record works, so there is a solution to this problem.
>
> yea, we don't poll for the event state change in perf stat,
> but we do that in perf record.. also because the perf poll
> code in the kernel is originally meant for tracking the ring
> buffer state
>
> maybe we could return EPOLLIN for alive events without a ring
> buffer.. like below (totally untested) and add polling for
> the event state into perf stat
>
> cc-ing perf folks
>
on second thought, the attached patch works as well
without a kernel change
jirka
---
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index b86aba1c8028..d1028d7755bb 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -409,6 +409,28 @@ static struct perf_evsel *perf_evsel__reset_weak_group(struct perf_evsel *evsel)
return leader;
}
+static bool is_target_alive(struct target *_target,
+ struct thread_map *threads)
+{
+ struct stat st;
+ int i;
+
+ if (!target__has_task(_target))
+ return true;
+
+ for (i = 0; i < threads->nr; i++) {
+ char path[PATH_MAX];
+
+ scnprintf(path, PATH_MAX, "%s/%d", procfs__mountpoint(),
+ threads->map[i].pid);
+
+ if (!stat(path, &st))
+ return true;
+ }
+
+ return false;
+}
+
static int __run_perf_stat(int argc, const char **argv, int run_idx)
{
int interval = stat_config.interval;
@@ -579,6 +601,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
enable_counters();
while (!done) {
nanosleep(&ts, NULL);
+ if (!is_target_alive(&target, evsel_list->threads))
+ break;
if (timeout)
break;
if (interval) {