Date:   Wed, 2 Dec 2020 14:22:19 +0200
From:   Andy Shevchenko <andy.shevchenko@...il.com>
To:     David Gow <davidgow@...gle.com>
Cc:     Arpitha Raghunandan <98.arpi@...il.com>,
        Brendan Higgins <brendanhiggins@...gle.com>,
        Shuah Khan <skhan@...uxfoundation.org>,
        KUnit Development <kunit-dev@...glegroups.com>,
        "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-kernel-mentees@...ts.linuxfoundation.org
Subject: Re: [PATCH v3] lib: Convert test_hexdump.c to KUnit

On Wed, Dec 2, 2020 at 1:57 PM David Gow <davidgow@...gle.com> wrote:
> On Wed, Dec 2, 2020 at 6:06 PM Andy Shevchenko
> <andy.shevchenko@...il.com> wrote:
> > On Wed, Dec 02, 2020 at 09:51:19AM +0530, Arpitha Raghunandan wrote:

...

> > What I'm talking about is the output. How it will be implemented (using the
> > same variable or differently) is up to you. The point is that I want to see
> > the success/total statistics at the end.
> >
> > I think this should be done in KUNIT rather than in the individual test cases.
>
> I tend to agree here that this really is something for KUnit. At the
> moment, the tools/testing/kunit/kunit.py script will parse the kernel
> log and generate these sorts of statistics. I know that needing to run
> it through a script might seem like a step backwards, but there's no
> formal place for statistics in the KTAP specification[1] being worked
> on to standardise kselftest/kunit output formats.

Then that sucks. Fix the specification in the long term. Meanwhile, does it
allow comment-style messages, so that these statistics can at least be
printed (even if not parsed)?

> Note that there are
> other parsers for TAP-like formats which are being used with KUnit
> results, so systems like LAVA could also sum up these statistics. It's
> also possible, as Arpitha alluded to, to have the test dump them out
> as a comment.

Fine by me.

> This won't actually work for this test as-is, though, as the KUnit
> version is running as a single giant test case (so KUnit believes that
> 1/1 tests have passed, rather than having any more-detailed
> statistics). It looks like there are a few ways to split it up a bit
> which would make it neater (a test each for the for() loops in
> test_hexdump_init() seems sensible to me), but at the moment, there's
> not really a way of programmatically generating test cases which KUnit
> then counts.

Fix it, please. We rely on these statistics quite heavily.
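For reference, the pass/total counting that kunit.py-style log parsing provides can be sketched roughly like this (illustrative only, not the actual kunit.py implementation; the log excerpt and test names are made up):

```python
import re

# Illustrative sketch: count (K)TAP result lines in a kernel log excerpt
# and report passed/total, in the spirit of what kunit.py's parser does.
# A result line looks like "ok N - name" or "not ok N - name".
RESULT_RE = re.compile(r"^\s*(not )?ok \d+ -")

def summarize(log_lines):
    """Return (passed, total) counted from TAP-style result lines."""
    passed = total = 0
    for line in log_lines:
        m = RESULT_RE.match(line)
        if m:
            total += 1
            if not m.group(1):  # no leading "not " => the test passed
                passed += 1
    return passed, total

log = [
    "TAP version 14",
    "1..3",
    "ok 1 - test_hexdump_set",
    "not ok 2 - test_hexdump_overflow_set_ascii",
    "ok 3 - test_hexdump_overflow",
]
print("%d/%d tests passed" % summarize(log))
# prints: 2/3 tests passed
```

This is exactly why a single giant test case is a problem: if everything runs as one case, the parser can only ever report 1/1.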

> The "Parameterised Tests"[2] work Arpitha has been working on ought to
> go some way to helping here, though it won't solve this completely in
> this initial version. The problem there is that parameterised tests
> are not reported individually in a way the kunit.py parser can report
> cleanly, yet, so it'll still only be counted as one test until that's
> changed (though, at least, that shouldn't require any test-specific
> work).
>
> My suggestion for the ultimate state of the test would be:
> - Split up the test into separate KUnit tests for the different
> "categories" of tests: (e.g., test_hexdump_set,
> test_hexdump_overflow_set_ascii, etc)
> - Replace the for loops in test_hexdump_init() with parameters, so
> that KUnit is aware of the original runs.
> - Once KUnit and the tooling supports it, these will be reported as
> subtests. (In the meantime, the results will be listed individually,
> commented out)

I'm fine with that, as long as this information is printed for the user.

> Of course, it'll take a while before all of those KUnit pieces are in
> place. I personally think that a good compromise would be to just do
> the first of these for now, which would make kunit_tool give at least
> a 4/4 rather than 1/1 result. Then, once the parameterised testing
> work is merged (and perhaps the tooling fixes are finished), the tests
> could be updated to take advantage of that.

How can we guarantee it will not be forgotten?

> [1]: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com/T/
> [2]: https://lore.kernel.org/linux-kselftest/20201116054035.211498-1-98.arpi@gmail.com/

-- 
With Best Regards,
Andy Shevchenko
