Message-Id: <1443817545-8551-11-git-send-email-acme@kernel.org>
Date: Fri, 2 Oct 2015 17:25:45 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Ingo Molnar <mingo@...nel.org>
Cc: linux-kernel@...r.kernel.org, Kan Liang <kan.liang@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: [PATCH 10/10] perf stat: Reduce min --interval-print to 10ms
From: Kan Liang <kan.liang@...el.com>
The --interval-print parameter was limited to a minimum of 100ms.
However, sophisticated bandwidth analysis using uncore events can
require intervals as short as 10ms.
Testing shows that the overhead of system-wide uncore monitoring with a
10ms interval is only ~2%, so this patch reduces the minimum allowed
--interval-print value to 10ms.
However, 10ms may not work well in all cases. For example, when the
number of cpus/threads is very large, the overhead of system-wide core
event monitoring could be high.
To handle this, a warning is displayed when the print interval is set
to a value between 10ms and 100ms, so users can decide whether the
overhead is acceptable for their specific case.
# perf stat -e uncore_imc_1/cas_count_read/ -a --interval-print 10 -- sleep 1
print interval < 100ms. The overhead percentage could be high in some
cases. Please proceed with caution.
#           time             counts unit events
     0.010200451               0.10 MiB  uncore_imc_1/cas_count_read/
     0.020475117               0.02 MiB  uncore_imc_1/cas_count_read/
     0.030692800               0.01 MiB  uncore_imc_1/cas_count_read/
     0.040948161               0.02 MiB  uncore_imc_1/cas_count_read/
     0.051159564               0.00 MiB  uncore_imc_1/cas_count_read/
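As a rough sketch of the kind of bandwidth analysis such short intervals
enable (the uncore_imc_*/cas_count_* event names and their MiB scaling
are platform dependent, so treat this as an illustration rather than a
recipe), read and write traffic on a memory channel can be sampled
together and converted to bandwidth by dividing each interval's count by
the interval length:

  # perf stat -a -I 10 -e uncore_imc_1/cas_count_read/ \
                       -e uncore_imc_1/cas_count_write/ -- sleep 1

Since the counts are already scaled to MiB (as in the output above), an
interval reporting 0.10 MiB over 10ms corresponds to roughly
0.10 MiB / 0.010 s = 10 MiB/s of read traffic on that channel.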
Signed-off-by: Kan Liang <kan.liang@...el.com>
Acked-by: Jiri Olsa <jolsa@...nel.org>
Cc: Andi Kleen <ak@...ux.intel.com>
Cc: Namhyung Kim <namhyung@...nel.org>
Link: http://lkml.kernel.org/r/1443776674-42511-1-git-send-email-kan.liang@intel.com
[ Added warning about overhead when using sub 100ms intervals to the man page ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@...hat.com>
---
 tools/perf/Documentation/perf-stat.txt |  5 +++--
 tools/perf/builtin-stat.c              | 13 +++++++++----
 2 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index 47469abdcc1c..4e074a660826 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -128,8 +128,9 @@ perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- m
 
 -I msecs::
 --interval-print msecs::
-	Print count deltas every N milliseconds (minimum: 100ms)
-	example: perf stat -I 1000 -e cycles -a sleep 5
+	Print count deltas every N milliseconds (minimum: 10ms)
+	The overhead percentage could be high in some cases, for instance with small, sub 100ms intervals. Use with caution.
+	example: 'perf stat -I 1000 -e cycles -a sleep 5'
 
 --per-socket::
 Aggregate counts per processor socket for system-wide mode measurements. This
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index a96fb5c3bedb..5ef88f760b12 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -1179,7 +1179,7 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_STRING(0, "post", &post_cmd, "command",
 		     "command to run after to the measured command"),
 	OPT_UINTEGER('I', "interval-print", &stat_config.interval,
-		    "print counts at regular interval in ms (>= 100)"),
+		    "print counts at regular interval in ms (>= 10)"),
 	OPT_SET_UINT(0, "per-socket", &stat_config.aggr_mode,
 		     "aggregate counts per processor socket", AGGR_SOCKET),
 	OPT_SET_UINT(0, "per-core", &stat_config.aggr_mode,
@@ -1332,9 +1332,14 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 	thread_map__read_comms(evsel_list->threads);
 
 	if (interval && interval < 100) {
-		pr_err("print interval must be >= 100ms\n");
-		parse_options_usage(stat_usage, options, "I", 1);
-		goto out;
+		if (interval < 10) {
+			pr_err("print interval must be >= 10ms\n");
+			parse_options_usage(stat_usage, options, "I", 1);
+			goto out;
+		} else
+			pr_warning("print interval < 100ms. "
+				   "The overhead percentage could be high in some cases. "
+				   "Please proceed with caution.\n");
 	}
 
 	if (perf_evlist__alloc_stats(evsel_list, interval))
--
2.1.0
--