Message-Id: <1268904661.2813.170.camel@localhost>
Date: Thu, 18 Mar 2010 17:31:01 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Ingo Molnar <mingo@...e.hu>,
Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: Avi Kivity <avi@...hat.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Sheng Yang <sheng@...ux.intel.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Marcelo Tosatti <mtosatti@...hat.com>,
Joerg Roedel <joro@...tes.org>,
Jes Sorensen <Jes.Sorensen@...hat.com>,
Gleb Natapov <gleb@...hat.com>,
Zachary Amsden <zamsden@...hat.com>, zhiteng.huang@...el.com
Subject: [PATCH 1/3] perf events: Enable counters when collecting
process-wide or system-wide data by 'perf stat'
The perf tool has several subcommands. There are a couple of issues around
the point at which counters are enabled. In addition, we want precise timing
when collecting system-wide or process/thread-wide statistics.
I worked out 3 patches against the tip/master tree of March 17 to fix these
issues and make perf more user-friendly.
Subject: [PATCH 1/3] perf events: Enable counters when collecting process-wide or system-wide data by 'perf stat'
From: Zhang, Yanmin <yanmin_zhang@...ux.intel.com>
Command 'perf stat' doesn't enable the counters when collecting statistics from
an existing process (via -p) or system-wide. Fix the issue.
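For background: with attr->disabled = 1 and attr->enable_on_exec = 1, the kernel
creates the counter stopped and only starts it when the target task calls exec().
An attached process (-p) or a system-wide session never execs, so such a counter
would never start; that is why the patch restricts those two flags to the case
where perf forks the workload itself. A minimal sketch of the distinction
(setup_attr is a hypothetical helper, not code from builtin-stat.c):

#include <linux/perf_event.h>
#include <string.h>

/*
 * Sketch: how the attribute flags differ by target. When perf
 * forks/execs the workload itself, the counter can be created
 * disabled and armed by enable_on_exec. When attaching to an
 * existing pid (-p) or counting system-wide (-a), no exec()
 * ever happens, so the counter must start out enabled.
 */
static void setup_attr(struct perf_event_attr *attr, int target_pid)
{
        memset(attr, 0, sizeof(*attr));
        attr->type = PERF_TYPE_HARDWARE;
        attr->config = PERF_COUNT_HW_CPU_CYCLES;
        if (target_pid == -1) {
                /* perf will fork/exec the workload itself */
                attr->disabled = 1;             /* create stopped ... */
                attr->enable_on_exec = 1;       /* ... start at exec() */
        }
        /*
         * else: leave disabled == 0 so the counter counts from
         * creation, since the attached task never calls exec().
         */
}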
Also change the condition for forking/exec'ing a subcommand: if a subcommand
parameter is present, perf always forks/execs it. A usage example is:
#perf stat -a sleep 10
This command collects system-wide statistics for precisely 10 seconds. The user
can still stop it early with CTRL+C. Without the new capability, the user could
only stop collection with CTRL+C, with no precise timing.
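In other words, the subcommand acts purely as a timer here: the counters measure
the whole system while the forked child bounds the measurement interval. A
simplified sketch of that flow (run_timed is a hypothetical stand-in for
run_perf_stat; the real code also synchronizes through pipes so the counters
exist before the child execs):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Sketch of the timed-run flow after the patch: with -a plus a
 * subcommand, perf forks the command and waits for it, so the
 * child's runtime bounds the measurement interval.
 */
static int run_timed(char **argv)
{
        pid_t child_pid;
        int status;

        child_pid = fork();
        if (child_pid < 0)
                return -1;
        if (!child_pid) {
                execvp(argv[0], argv);  /* e.g. "sleep", "10" */
                _exit(127);
        }
        /* ... create counters here (system-wide or per-pid) ... */
        wait(&status);                  /* returns after ~10 seconds */
        /* ... read and print the counter deltas ... */
        return status;
}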
Another issue is that 'perf stat -a' busy-waits, consuming 100% of one logical
CPU and perturbing the running workload. Fix it by adding a sleep(1) to the
while(!done) loop in function run_perf_stat.
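The old loop spun on a flag that only the signal handler ever sets, pinning one
CPU and skewing the very workload being measured. Sleeping between checks makes
the wait essentially free, and CTRL+C still takes effect immediately because the
signal interrupts sleep(). A minimal standalone sketch of the pattern:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t done;

static void skip_signal(int signo)
{
        (void)signo;
        done = 1;               /* the only work done in the handler */
}

int main(void)
{
        signal(SIGINT, skip_signal);

        /*
         * Before: while (!done); -- a busy loop burning 100% of one
         * logical CPU. After: sleep up to 1s between checks; SIGINT
         * interrupts sleep() early, so CTRL+C response is unchanged.
         */
        while (!done)
                sleep(1);

        return 0;
}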
Signed-off-by: Zhang Yanmin <yanmin_zhang@...ux.intel.com>
---
diff -Nraup linux-2.6_tipmaster0317/tools/perf/builtin-stat.c linux-2.6_tipmaster0317_fixstat/tools/perf/builtin-stat.c
--- linux-2.6_tipmaster0317/tools/perf/builtin-stat.c 2010-03-18 09:04:40.938289813 +0800
+++ linux-2.6_tipmaster0317_fixstat/tools/perf/builtin-stat.c 2010-03-18 13:07:26.773773541 +0800
@@ -159,8 +159,10 @@ static void create_perf_stat_counter(int
}
} else {
attr->inherit = inherit;
- attr->disabled = 1;
- attr->enable_on_exec = 1;
+ if (target_pid == -1) {
+ attr->disabled = 1;
+ attr->enable_on_exec = 1;
+ }
fd[0][counter] = sys_perf_event_open(attr, pid, -1, -1, 0);
if (fd[0][counter] < 0 && verbose)
@@ -251,9 +253,9 @@ static int run_perf_stat(int argc __used
unsigned long long t0, t1;
int status = 0;
int counter;
- int pid = target_pid;
+ int pid;
int child_ready_pipe[2], go_pipe[2];
- const bool forks = (target_pid == -1 && argc > 0);
+ const bool forks = (argc > 0);
char buf;
if (!system_wide)
@@ -265,10 +267,10 @@ static int run_perf_stat(int argc __used
}
if (forks) {
- if ((pid = fork()) < 0)
+ if ((child_pid = fork()) < 0)
perror("failed to fork");
- if (!pid) {
+ if (!child_pid) {
close(child_ready_pipe[0]);
close(go_pipe[1]);
fcntl(go_pipe[0], F_SETFD, FD_CLOEXEC);
@@ -297,8 +299,6 @@ static int run_perf_stat(int argc __used
exit(-1);
}
- child_pid = pid;
-
/*
* Wait for the child to be ready to exec.
*/
@@ -309,6 +309,10 @@ static int run_perf_stat(int argc __used
close(child_ready_pipe[0]);
}
+ if (target_pid == -1)
+ pid = child_pid;
+ else
+ pid = target_pid;
for (counter = 0; counter < nr_counters; counter++)
create_perf_stat_counter(counter, pid);
@@ -321,7 +325,7 @@ static int run_perf_stat(int argc __used
close(go_pipe[1]);
wait(&status);
} else {
- while(!done);
+ while(!done) sleep(1);
}
t1 = rdclock();
@@ -459,7 +463,7 @@ static volatile int signr = -1;
static void skip_signal(int signo)
{
- if(target_pid != -1)
+ if(child_pid == -1)
done = 1;
signr = signo;
--