Message-ID: <20171121151831.GM20440@krava>
Date: Tue, 21 Nov 2017 16:18:32 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Jin Yao <yao.jin@...ux.intel.com>
Cc: acme@...nel.org, jolsa@...nel.org, peterz@...radead.org,
mingo@...hat.com, alexander.shishkin@...ux.intel.com,
Linux-kernel@...r.kernel.org, ak@...ux.intel.com,
kan.liang@...el.com, yao.jin@...el.com
Subject: Re: [PATCH v1 8/9] perf stat: Remove --per-thread pid/tid limitation
On Mon, Nov 20, 2017 at 10:43:43PM +0800, Jin Yao wrote:
> Currently, if we execute 'perf stat --per-thread' without specifying
> pid/tid, perf will return error.
>
> root@skl:/tmp# perf stat --per-thread
> The --per-thread option is only available when monitoring via -p -t options.
> -p, --pid <pid> stat events on existing process id
> -t, --tid <tid> stat events on existing thread id
>
> This patch removes this limitation. If no pid/tid specified, it returns
> all threads (get threads from /proc).
>
> Signed-off-by: Jin Yao <yao.jin@...ux.intel.com>
> ---
> tools/perf/builtin-stat.c | 23 +++++++++++++++--------
> tools/perf/util/target.h | 7 +++++++
> 2 files changed, 22 insertions(+), 8 deletions(-)
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index 9eec145..2d718f7 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -277,7 +277,7 @@ static int create_perf_stat_counter(struct perf_evsel *evsel)
> attr->enable_on_exec = 1;
> }
>
> - if (target__has_cpu(&target))
> + if (target__has_cpu(&target) && !target__has_per_thread(&target))
please add comment on why this is needed..
> return perf_evsel__open_per_cpu(evsel, perf_evsel__cpus(evsel));
>
> return perf_evsel__open_per_thread(evsel, evsel_list->threads);
> @@ -340,7 +340,7 @@ static int read_counter(struct perf_evsel *counter)
> int nthreads = thread_map__nr(evsel_list->threads);
> int ncpus, cpu, thread;
>
> - if (target__has_cpu(&target))
> + if (target__has_cpu(&target) && !target__has_per_thread(&target))
same here
thanks,
jirka