Message-ID: <CABVgOSk_kj4w74gP8og_x07AQgwaF0WkH5-hmkoEdcviQb5_tQ@mail.gmail.com>
Date: Thu, 30 Sep 2021 10:29:16 +0800
From: David Gow <davidgow@...gle.com>
To: Daniel Latypov <dlatypov@...gle.com>
Cc: Brendan Higgins <brendanhiggins@...gle.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
KUnit Development <kunit-dev@...glegroups.com>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
Shuah Khan <skhan@...uxfoundation.org>
Subject: Re: [PATCH v3 3/4] kunit: tool: actually track how long it took to
run tests
On Thu, Sep 30, 2021 at 3:54 AM Daniel Latypov <dlatypov@...gle.com> wrote:
>
> This is a long-standing bug in the kunit tool.
> Since these files were added, run_kernel() has always yielded lines.
>
> That means the call to run_kernel() returns before the kernel finishes
> executing tests, potentially before a single line of output is even
> produced.
>
> So code like this
> time_start = time.time()
> result = linux.run_kernel(...)
> time_end = time.time()
>
> would only measure the time taken for python to give back the generator
> object.
>
> From a caller's perspective, the only way to know the kernel has exited
> is for us to consume all the output from the `result` generator object.
> Alternatively, we could change run_kernel() to try to do its own
> bookkeeping and return the total time, but that doesn't seem worth it.
>
> This change makes us record `time_end` after we're done parsing all the
> output (which should mean we've consumed all of it, or errored out).
> That means we're including the parsing time as well, but that should
> be quite small, and it's better than claiming it took 0s to run tests.
>
> Let's use this as an example:
> $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit example
>
> Before:
> Elapsed time: 7.684s total, 0.001s configuring, 4.692s building, 0.000s running
>
> After:
> Elapsed time: 6.283s total, 0.001s configuring, 3.202s building, 3.079s running
>
> Signed-off-by: Daniel Latypov <dlatypov@...gle.com>
> ---
Thanks for looking into and fixing this: clearly I should've noticed
it before. :-)
Including the parsing time as well doesn't worry me: as you note, it
should be negligible.
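
For anyone skimming the thread, here's a minimal standalone sketch of
the generator behaviour being fixed (plain Python, not the real
kunit_kernel API; the run_kernel() below is just a stand-in):

    import time
    from typing import Iterator

    def run_kernel() -> Iterator[str]:
        # Stand-in for LinuxSourceTree.run_kernel(): yields output lazily.
        for i in range(3):
            time.sleep(1)  # pretend the kernel is doing real work
            yield 'output line %d' % i

    start = time.time()
    result = run_kernel()
    # Buggy measurement: the generator object exists, but nothing has run yet.
    print('before consuming: %.3fs' % (time.time() - start))  # ~0s

    for line in result:
        pass
    # Correct measurement: the "kernel" only finishes once all output is consumed.
    print('after consuming: %.3fs' % (time.time() - start))   # ~3s

The patch effectively does the latter: test_end is only recorded after
parse_tests() has consumed the run_kernel() output.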
Reviewed-by: David Gow <davidgow@...gle.com>
-- David
> tools/testing/kunit/kunit.py | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
> index 31eec9f6ecc3..5e717594df5b 100755
> --- a/tools/testing/kunit/kunit.py
> +++ b/tools/testing/kunit/kunit.py
> @@ -101,12 +101,14 @@ def exec_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest,
> filter_glob=request.filter_glob,
> build_dir=request.build_dir)
>
> + result = parse_tests(parse_request, run_result)
> +
> + # run_kernel() doesn't block on the kernel exiting.
> + # That only happens after we get the last line of output from `run_result`.
> + # So exec_time here actually contains parsing + execution time, which is fine.
> test_end = time.time()
> exec_time = test_end - test_start
>
> - # Named tuples are immutable, so we rebuild them here manually
> - result = parse_tests(parse_request, run_result)
> -
> return KunitResult(status=result.status, result=result.result, elapsed_time=exec_time)
>
> def parse_tests(request: KunitParseRequest, input_data: Iterable[str]) -> KunitResult:
> --
> 2.33.0.685.g46640cef36-goog
>