Message-ID: <CABVgOSmn_uTZQ4OzZQM02QEbmzxvd+gJn1M8S2KGhPSEqcjW7w@mail.gmail.com>
Date:   Thu, 24 Nov 2022 16:45:08 +0800
From:   David Gow <davidgow@...gle.com>
To:     Rae Moar <rmoar@...gle.com>
Cc:     brendanhiggins@...gle.com, dlatypov@...gle.com,
        skhan@...uxfoundation.org, mauro.chehab@...ux.intel.com,
        kunit-dev@...glegroups.com, linux-kernel@...r.kernel.org,
        linux-kselftest@...r.kernel.org, isabbasso@...eup.net,
        anders.roxell@...aro.org
Subject: Re: [PATCH v3 1/2] kunit: tool: parse KTAP compliant test output

On Thu, Nov 24, 2022 at 2:26 AM Rae Moar <rmoar@...gle.com> wrote:
>
> Change the KUnit parser to be able to parse test output that complies with
> the KTAP version 1 specification format found here:
> https://kernel.org/doc/html/latest/dev-tools/ktap.html. Ensure the parser
> is able to parse tests with the original KUnit test output format as
> well.
>
> KUnit parser now accepts any of the following test output formats:
>
> Original KUnit test output format:
>
>  TAP version 14
>  1..1
>    # Subtest: kunit-test-suite
>    1..3
>    ok 1 - kunit_test_1
>    ok 2 - kunit_test_2
>    ok 3 - kunit_test_3
>  # kunit-test-suite: pass:3 fail:0 skip:0 total:3
>  # Totals: pass:3 fail:0 skip:0 total:3
>  ok 1 - kunit-test-suite
>
> KTAP version 1 test output format:
>
>  KTAP version 1
>  1..1
>    KTAP version 1
>    1..3
>    ok 1 kunit_test_1
>    ok 2 kunit_test_2
>    ok 3 kunit_test_3
>  ok 1 kunit-test-suite
>
> New KUnit test output format (changes made in the next patch of
> this series):
>
>  KTAP version 1
>  1..1
>    KTAP version 1
>    # Subtest: kunit-test-suite
>    1..3
>    ok 1 kunit_test_1
>    ok 2 kunit_test_2
>    ok 3 kunit_test_3
>  # kunit-test-suite: pass:3 fail:0 skip:0 total:3
>  # Totals: pass:3 fail:0 skip:0 total:3
>  ok 1 kunit-test-suite
>
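For context, the parser tells these header styles apart with regular expressions. A minimal sketch of that classification (hypothetical helper names; the patterns approximate, but are not, the patch's actual regexes):

```python
import re

# Hypothetical patterns approximating the three header styles the parser
# must now accept: TAP version, KTAP version, and "# Subtest:" lines.
KTAP_START = re.compile(r'\s*KTAP version \d+$')
TAP_START = re.compile(r'\s*TAP version \d+$')
SUBTEST_HEADER = re.compile(r'\s*# Subtest: (.*)$')

def classify_header(line: str) -> str:
    """Return which kind of header line this is, or 'other'."""
    if KTAP_START.match(line):
        return 'ktap'
    if TAP_START.match(line):
        return 'tap'
    if SUBTEST_HEADER.match(line):
        return 'subtest'
    return 'other'

print(classify_header('KTAP version 1'))      # -> ktap
print(classify_header('TAP version 14'))      # -> tap
print(classify_header('  # Subtest: suite'))  # -> subtest
```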
> Signed-off-by: Rae Moar <rmoar@...gle.com>
> Reviewed-by: Daniel Latypov <dlatypov@...gle.com>
> Reviewed-by: David Gow <davidgow@...gle.com>
> ---
>

Thanks for fixing these things. This still looks good to me.

Reviewed-by: David Gow <davidgow@...gle.com>

Cheers,
-- David
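For anyone skimming the thread, the recursive control flow the new is_subtest parameter enables can be sketched roughly as follows. This is a hypothetical, heavily simplified happy-path model: the real parse_test() in kunit_parser.py works on a LineStream and Test objects and also handles diagnostics, "# Subtest:" headers, and error cases.

```python
import re

# Simplified stand-ins for the parser's line patterns.
VERSION = re.compile(r'\s*K?TAP version \d+$')
PLAN = re.compile(r'\s*1\.\.(\d+)$')
RESULT = re.compile(r'\s*ok (\d+) ?-? ?(.*)$')

def parse_test(lines, is_subtest):
    test = {'name': '' if is_subtest else 'main', 'subtests': []}
    # Both the main test and a subtest may start with a (K)TAP version line.
    if lines and VERSION.match(lines[0]):
        lines.pop(0)
    # An optional test plan gives the expected number of subtests.
    expected = 0
    if lines and PLAN.match(lines[0]):
        expected = int(PLAN.match(lines.pop(0)).group(1))
    for _ in range(expected):
        if lines and VERSION.match(lines[0]):
            # A nested version line marks a nested subtest: recurse.
            test['subtests'].append(parse_test(lines, True))
        else:
            # Otherwise it is a plain test case result line.
            m = RESULT.match(lines.pop(0))
            test['subtests'].append({'name': m.group(2), 'subtests': []})
    if is_subtest and lines:
        # A subtest is closed by its own result line in the parent stream.
        test['name'] = RESULT.match(lines.pop(0)).group(2)
    return test

ktap = """KTAP version 1
1..1
  KTAP version 1
  1..3
  ok 1 case_1
  ok 2 case_2
  ok 3 case_3
ok 1 suite""".splitlines()

result = parse_test(ktap, False)
print(result['name'])                          # -> main
print(result['subtests'][0]['name'])           # -> suite
print(len(result['subtests'][0]['subtests']))  # -> 3
```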


> Changes since v2:
> https://lore.kernel.org/all/CA+GJov4QZ8yrD8sgGeMYJ4zYkg2CEUX8owqzPFE0BQGe_f0bFQ@mail.gmail.com/
> - Rebased onto linux-kselftest/kunit to correct merge conflict with
>   recently approved patch
> - Fixed typo
> - Added test_parse_subtest_header to test whether the “# Subtest:”
>   line is being parsed correctly when using the new test format
>
> Changes since v1:
> https://lore.kernel.org/all/20221104194705.3245738-2-rmoar@google.com/
> - Switch order of patches to make changes to the parser before making
>   changes to the test output
> - Change placeholder label for test header from “Test suite” to empty
>   string
> - Change parser to accept the new KTAP version line in the subtest header
>   before the “# Subtest” header line rather than after it
> - Note: Considered changing the parser to allow the top level of testing
>   to have a '# Subtest' line, as discussed in v1, but this breaks the
>   missing-test-plan test. So I think it would be best to add this ability
>   later, or after top-level test name and result lines are discussed for
>   KTAP v2.
>
>  tools/testing/kunit/kunit_parser.py           | 79 ++++++++++++-------
>  tools/testing/kunit/kunit_tool_test.py        | 14 ++++
>  .../test_data/test_parse_ktap_output.log      |  8 ++
>  .../test_data/test_parse_subtest_header.log   |  7 ++
>  4 files changed, 80 insertions(+), 28 deletions(-)
>  create mode 100644 tools/testing/kunit/test_data/test_parse_ktap_output.log
>  create mode 100644 tools/testing/kunit/test_data/test_parse_subtest_header.log
>
> diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
> index d0ed5dd5cfc4..4cc2f8b7ecd0 100644
> --- a/tools/testing/kunit/kunit_parser.py
> +++ b/tools/testing/kunit/kunit_parser.py
> @@ -441,6 +441,7 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
>         - '# Subtest: [test name]'
>         - '[ok|not ok] [test number] [-] [test name] [optional skip
>                 directive]'
> +       - 'KTAP version [version number]'
>
>         Parameters:
>         lines - LineStream of KTAP output to parse
> @@ -449,8 +450,9 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
>         Log of diagnostic lines
>         """
>         log = []  # type: List[str]
> -       while lines and not TEST_RESULT.match(lines.peek()) and not \
> -                       TEST_HEADER.match(lines.peek()):
> +       non_diagnostic_lines = [TEST_RESULT, TEST_HEADER, KTAP_START]
> +       while lines and not any(re.match(lines.peek())
> +                       for re in non_diagnostic_lines):
>                 log.append(lines.pop())
>         return log
>
> @@ -496,11 +498,15 @@ def print_test_header(test: Test) -> None:
>         test - Test object representing current test being printed
>         """
>         message = test.name
> +       if message != "":
> +               # Add a leading space before the subtest counts only if a test name
> +               # is provided using a "# Subtest" header line.
> +               message += " "
>         if test.expected_count:
>                 if test.expected_count == 1:
> -                       message += ' (1 subtest)'
> +                       message += '(1 subtest)'
>                 else:
> -                       message += f' ({test.expected_count} subtests)'
> +                       message += f'({test.expected_count} subtests)'
>         stdout.print_with_timestamp(format_test_divider(message, len(message)))
>
>  def print_log(log: Iterable[str]) -> None:
> @@ -647,7 +653,7 @@ def bubble_up_test_results(test: Test) -> None:
>         elif test.counts.get_status() == TestStatus.TEST_CRASHED:
>                 test.status = TestStatus.TEST_CRASHED
>
> -def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
> +def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest: bool) -> Test:
>         """
>         Finds next test to parse in LineStream, creates new Test object,
>         parses any subtests of the test, populates Test object with all
> @@ -665,15 +671,32 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
>         1..4
>         [subtests]
>
> -       - Subtest header line
> +       - Subtest header (must include either the KTAP version line or
> +         "# Subtest" header line)
>
> -       Example:
> +       Example (preferred format with both KTAP version line and
> +       "# Subtest" line):
> +
> +       KTAP version 1
> +       # Subtest: name
> +       1..3
> +       [subtests]
> +       ok 1 name
> +
> +       Example (only "# Subtest" line):
>
>         # Subtest: name
>         1..3
>         [subtests]
>         ok 1 name
>
> +       Example (only KTAP version line, compliant with KTAP v1 spec):
> +
> +       KTAP version 1
> +       1..3
> +       [subtests]
> +       ok 1 name
> +
>         - Test result line
>
>         Example:
> @@ -685,28 +708,29 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
>         expected_num - expected test number for test to be parsed
>         log - list of strings containing any preceding diagnostic lines
>                 corresponding to the current test
> +       is_subtest - boolean indicating whether test is a subtest
>
>         Return:
>         Test object populated with characteristics and any subtests
>         """
>         test = Test()
>         test.log.extend(log)
> -       parent_test = False
> -       main = parse_ktap_header(lines, test)
> -       if main:
> -               # If KTAP/TAP header is found, attempt to parse
> +       if not is_subtest:
> +               # If parsing the main/top-level test, parse KTAP version line and
>                 # test plan
>                 test.name = "main"
> +               ktap_line = parse_ktap_header(lines, test)
>                 parse_test_plan(lines, test)
>                 parent_test = True
>         else:
> -               # If KTAP/TAP header is not found, test must be subtest
> -               # header or test result line so parse attempt to parser
> -               # subtest header
> -               parent_test = parse_test_header(lines, test)
> +               # If not the main test, attempt to parse a test header containing
> +               # the KTAP version line and/or subtest header line
> +               ktap_line = parse_ktap_header(lines, test)
> +               subtest_line = parse_test_header(lines, test)
> +               parent_test = (ktap_line or subtest_line)
>                 if parent_test:
> -                       # If subtest header is found, attempt to parse
> -                       # test plan and print header
> +                       # If KTAP version line and/or subtest header is found, attempt
> +                       # to parse test plan and print test header
>                         parse_test_plan(lines, test)
>                         print_test_header(test)
>         expected_count = test.expected_count
> @@ -721,7 +745,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
>                 sub_log = parse_diagnostic(lines)
>                 sub_test = Test()
>                 if not lines or (peek_test_name_match(lines, test) and
> -                               not main):
> +                               is_subtest):
>                         if expected_count and test_num <= expected_count:
>                                 # If parser reaches end of test before
>                                 # parsing expected number of subtests, print
> @@ -735,20 +759,19 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
>                                 test.log.extend(sub_log)
>                                 break
>                 else:
> -                       sub_test = parse_test(lines, test_num, sub_log)
> +                       sub_test = parse_test(lines, test_num, sub_log, True)
>                 subtests.append(sub_test)
>                 test_num += 1
>         test.subtests = subtests
> -       if not main:
> +       if is_subtest:
>                 # If not main test, look for test result line
>                 test.log.extend(parse_diagnostic(lines))
> -               if (parent_test and peek_test_name_match(lines, test)) or \
> -                               not parent_test:
> -                       parse_test_result(lines, test, expected_num)
> -               else:
> +               if test.name != "" and not peek_test_name_match(lines, test):
>                         test.add_error('missing subtest result line!')
> +               else:
> +                       parse_test_result(lines, test, expected_num)
>
> -       # Check for there being no tests
> +       # Check for there being no subtests within parent test
>         if parent_test and len(subtests) == 0:
>                 # Don't override a bad status if this test had one reported.
>                 # Assumption: no subtests means CRASHED is from Test.__init__()
> @@ -758,11 +781,11 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
>
>         # Add statuses to TestCounts attribute in Test object
>         bubble_up_test_results(test)
> -       if parent_test and not main:
> +       if parent_test and is_subtest:
>                 # If test has subtests and is not the main test object, print
>                 # footer.
>                 print_test_footer(test)
> -       elif not main:
> +       elif is_subtest:
>                 print_test_result(test)
>         return test
>
> @@ -785,7 +808,7 @@ def parse_run_tests(kernel_output: Iterable[str]) -> Test:
>                 test.add_error('Could not find any KTAP output. Did any KUnit tests run?')
>                 test.status = TestStatus.FAILURE_TO_PARSE_TESTS
>         else:
> -               test = parse_test(lines, 0, [])
> +               test = parse_test(lines, 0, [], False)
>                 if test.status != TestStatus.NO_TESTS:
>                         test.status = test.counts.get_status()
>         stdout.print_with_timestamp(DIVIDER)
> diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
> index 84a08cf07242..d7f669cbf2a8 100755
> --- a/tools/testing/kunit/kunit_tool_test.py
> +++ b/tools/testing/kunit/kunit_tool_test.py
> @@ -312,6 +312,20 @@ class KUnitParserTest(unittest.TestCase):
>                 self.assertEqual(kunit_parser._summarize_failed_tests(result),
>                         'Failures: all_failed_suite, some_failed_suite.test2')
>
> +       def test_ktap_format(self):
> +               ktap_log = test_data_path('test_parse_ktap_output.log')
> +               with open(ktap_log) as file:
> +                       result = kunit_parser.parse_run_tests(file.readlines())
> +               self.assertEqual(result.counts, kunit_parser.TestCounts(passed=3))
> +               self.assertEqual('suite', result.subtests[0].name)
> +               self.assertEqual('case_1', result.subtests[0].subtests[0].name)
> +               self.assertEqual('case_2', result.subtests[0].subtests[1].name)
> +
> +       def test_parse_subtest_header(self):
> +               ktap_log = test_data_path('test_parse_subtest_header.log')
> +               with open(ktap_log) as file:
> +                       result = kunit_parser.parse_run_tests(file.readlines())
> +               self.print_mock.assert_any_call(StrContains('suite (1 subtest)'))
>
>  def line_stream_from_strs(strs: Iterable[str]) -> kunit_parser.LineStream:
>         return kunit_parser.LineStream(enumerate(strs, start=1))
> diff --git a/tools/testing/kunit/test_data/test_parse_ktap_output.log b/tools/testing/kunit/test_data/test_parse_ktap_output.log
> new file mode 100644
> index 000000000000..ccdf244e5303
> --- /dev/null
> +++ b/tools/testing/kunit/test_data/test_parse_ktap_output.log
> @@ -0,0 +1,8 @@
> +KTAP version 1
> +1..1
> +  KTAP version 1
> +  1..3
> +  ok 1 case_1
> +  ok 2 case_2
> +  ok 3 case_3
> +ok 1 suite
> diff --git a/tools/testing/kunit/test_data/test_parse_subtest_header.log b/tools/testing/kunit/test_data/test_parse_subtest_header.log
> new file mode 100644
> index 000000000000..216631092e7b
> --- /dev/null
> +++ b/tools/testing/kunit/test_data/test_parse_subtest_header.log
> @@ -0,0 +1,7 @@
> +KTAP version 1
> +1..1
> +  KTAP version 1
> +  # Subtest: suite
> +  1..1
> +  ok 1 test
> +ok 1 suite
> \ No newline at end of file
>
> base-commit: 99c8c9276be71e6bc98979e95d56cdcbe0c2454e
> --
> 2.38.1.584.g0f3c55d4c2-goog
>
