Message-ID: <YaOhbfWzMv/uvKKi@krava>
Date: Sun, 28 Nov 2021 16:34:05 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Sohaib Mohamed <sohaib.amhmd@...il.com>
Cc: irogers@...gle.com, Riccardo Mancini <rickyman7@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Fabian Hemmer <copy@...y.sh>, linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] Unbuffered output when pipe/tee to a file

On Fri, Nov 19, 2021 at 08:14:08AM +0200, Sohaib Mohamed wrote:
> The output of perf bench gets buffered when I pipe it to a file or
> to tee, so that I only see it at the end.
>
> E.g.
> $ perf bench internals synthesize -t
> < output comes out fine after each test run >
>
> $ perf bench internals synthesize -t | tee file.txt
> < output comes out only at the end of all tests >
>
> This patch resolves the issue for the 'bench' and 'test' subcommands.
I can reproduce this for bench, but not for the test subcommand.
Other than that it makes sense to me.

jirka
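
For context, the behaviour described in the report is standard stdio
buffering: stdout is line-buffered when it is a terminal, but switches
to full buffering when it is a pipe or a regular file, so printf output
only shows up once the buffer fills or the process exits. A minimal
standalone sketch (hypothetical demo program, not perf code) that
reproduces it:

	/* buffering-demo.c (hypothetical): run it directly, then pipe it
	 * through tee, and compare when the lines appear. */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int i;

		for (i = 0; i < 5; i++) {
			/* Line-buffered on a tty: one line per second.
			 * Fully buffered on a pipe: all lines at exit. */
			printf("step %d\n", i);
			sleep(1);
		}
		return 0;
	}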
>
> See also:
> $ perf bench mem all | tee file.txt
> $ perf bench sched all | tee file.txt
> $ perf bench internals all -t | tee file.txt
> $ perf bench internals all | tee file.txt
>
> Suggested-by: Riccardo Mancini <rickyman7@...il.com>
> Signed-off-by: Sohaib Mohamed <sohaib.amhmd@...il.com>
> ---
> v1 -> v2:
> - Use setvbuf() instead of sprinkling fflush() calls, which risked missing some call sites.
>
> v1: https://lore.kernel.org/linux-perf-users/20211112215313.108823-1-sohaib.amhmd@gmail.com/
> ---
> tools/perf/builtin-bench.c | 5 +++--
> tools/perf/tests/builtin-test.c | 3 +++
> 2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c
> index d0895162c2ba..d291f3a8af5f 100644
> --- a/tools/perf/builtin-bench.c
> +++ b/tools/perf/builtin-bench.c
> @@ -226,7 +226,6 @@ static void run_collection(struct collection *coll)
> if (!bench->fn)
> break;
> printf("# Running %s/%s benchmark...\n", coll->name, bench->name);
> - fflush(stdout);
>
> argv[1] = bench->name;
> run_bench(coll->name, bench->name, bench->fn, 1, argv);
> @@ -247,6 +246,9 @@ int cmd_bench(int argc, const char **argv)
> struct collection *coll;
> int ret = 0;
>
> + /* Unbuffered output */
> + setvbuf(stdout, NULL, _IONBF, 0);
> +
> if (argc < 2) {
> /* No collection specified. */
> print_usage();
> @@ -300,7 +302,6 @@ int cmd_bench(int argc, const char **argv)
>
> if (bench_format == BENCH_FORMAT_DEFAULT)
> printf("# Running '%s/%s' benchmark:\n", coll->name, bench->name);
> - fflush(stdout);
> ret = run_bench(coll->name, bench->name, bench->fn, argc-1, argv+1);
> goto end;
> }
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 8cb5a1c3489e..d92ae4efd2e6 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -606,6 +606,9 @@ int cmd_test(int argc, const char **argv)
> if (ret < 0)
> return ret;
>
> + /* Unbuffered output */
> + setvbuf(stdout, NULL, _IONBF, 0);
> +
> argc = parse_options_subcommand(argc, argv, test_options, test_subcommands, test_usage, 0);
> if (argc >= 1 && !strcmp(argv[0], "list"))
> return perf_test__list(argc - 1, argv + 1);
> --
> 2.25.1
>
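
For reference, the fix comes down to the single setvbuf() call added
above: disabling buffering once at command start covers every later
printf, whereas the v1 approach of calling fflush(stdout) after each
printf was easy to miss in some paths. A minimal sketch of the same
idea outside perf:

	#include <stdio.h>

	int main(void)
	{
		/* One call up front disables stdio buffering on stdout for
		 * the whole run, even when stdout is a pipe or a file. */
		setvbuf(stdout, NULL, _IONBF, 0);

		printf("# Running benchmark...\n");	/* reaches tee immediately */
		/* ... long-running work ... */
		return 0;
	}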