Date:   Thu, 10 Jun 2021 15:39:06 +0300
From:   Andy Shevchenko <andy.shevchenko@...il.com>
To:     David Gow <davidgow@...gle.com>
Cc:     Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        André Almeida <andrealmeid@...labora.com>,
        Christoph Hellwig <hch@....de>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Brendan Higgins <brendanhiggins@...gle.com>,
        "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>,
        KUnit Development <kunit-dev@...glegroups.com>,
        Shuah Khan <shuah@...nel.org>, ~lkcamp/patches@...ts.sr.ht,
        nfraprado@...labora.com, leandro.ribeiro@...labora.com,
        Vitor Massaru Iha <vitor@...saru.org>, lucmaga@...il.com,
        Daniel Latypov <dlatypov@...gle.com>, tales.aparecida@...il.com
Subject: Re: [PATCH v2 0/1] lib: Convert UUID runtime test to KUnit

On Thu, Jun 10, 2021 at 2:54 PM David Gow <davidgow@...gle.com> wrote:
> On Thu, Jun 10, 2021 at 5:14 PM Andy Shevchenko
> <andriy.shevchenko@...ux.intel.com> wrote:
> > On Wed, Jun 09, 2021 at 08:37:29PM -0300, André Almeida wrote:

...

> Note that this output is from the kunit_tool script, which parses the
> test output.
> It does include a summary line:
> [04:41:01] Testing complete. 4 tests run. 0 failed. 0 crashed.

> Note that this only counts the number of "tests" run --- the
> individual UUIDs are parameters to the same test, so they aren't
> counted independently by the wrapper at the moment.
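For context, the summary line quoted above is what the kunit.py wrapper
prints after it has parsed the raw KTAP stream. Assuming the default UML
setup, the invocation looks something like:

    ./tools/testing/kunit/kunit.py run

which builds and boots the test kernel, captures the output shown below,
and reports the parsed totals.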
>
> That being said, the raw output looks like this (all tests passed):
> TAP version 14
> 1..1
>    # Subtest: uuid
>    1..4
>    # uuid_correct_be: ok 1 - c33f4995-3701-450e-9fbf-206a2e98e576
>    # uuid_correct_be: ok 2 - 64b4371c-77c1-48f9-8221-29f054fc023b
>    # uuid_correct_be: ok 3 - 0cb4ddff-a545-4401-9d06-688af53e7f84
>    ok 1 - uuid_correct_be
>    # uuid_correct_le: ok 1 - c33f4995-3701-450e-9fbf-206a2e98e576
>    # uuid_correct_le: ok 2 - 64b4371c-77c1-48f9-8221-29f054fc023b
>    # uuid_correct_le: ok 3 - 0cb4ddff-a545-4401-9d06-688af53e7f84
>    ok 2 - uuid_correct_le
>    # uuid_wrong_be: ok 1 - c33f4995-3701-450e-9fbf206a2e98e576
>    # uuid_wrong_be: ok 2 - 64b4371c-77c1-48f9-8221-29f054XX023b
>    # uuid_wrong_be: ok 3 - 0cb4ddff-a545-4401-9d06-688af53e
>    ok 3 - uuid_wrong_be
>    # uuid_wrong_le: ok 1 - c33f4995-3701-450e-9fbf206a2e98e576
>    # uuid_wrong_le: ok 2 - 64b4371c-77c1-48f9-8221-29f054XX023b
>    # uuid_wrong_le: ok 3 - 0cb4ddff-a545-4401-9d06-688af53e
>    ok 4 - uuid_wrong_le
> ok 1 - uuid
>
> A test which failed could look like this:
>     # uuid_correct_le: ASSERTION FAILED at lib/test_uuid.c:46
>    Expected guid_parse(data->uuid, &le) == 0, but
>        guid_parse(data->uuid, &le) == -22
>
> failed to parse 'c33f499x5-3701-450e-9fbf-206a2e98e576'
>    # uuid_correct_le: not ok 1 - c33f499x5-3701-450e-9fbf-206a2e98e576
>    # uuid_correct_le: ok 2 - 64b4371c-77c1-48f9-8221-29f054fc023b
>    # uuid_correct_le: ok 3 - 0cb4ddff-a545-4401-9d06-688af53e7f84
>    not ok 2 - uuid_correct_le
>
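For reference, a parameterized KUnit case of this kind is typically wired up
roughly as sketched below. The case/suite names, guid_parse() and data->uuid
are taken from the output above; the struct, field and helper names are
assumptions made for illustration and need not match the actual patch.

/*
 * Illustrative sketch only: the three UUID strings are parameters of a
 * single case, which is why the wrapper counts "tests" and "parameters"
 * separately.
 */
#include <kunit/test.h>
#include <linux/string.h>
#include <linux/uuid.h>

struct uuid_test_data {
        const char *uuid;       /* textual UUID under test */
        guid_t le;              /* expected little-endian parse result */
};

static const struct uuid_test_data uuid_correct_data[] = {
        {
                .uuid = "c33f4995-3701-450e-9fbf-206a2e98e576",
                .le = GUID_INIT(0xc33f4995, 0x3701, 0x450e, 0x9f, 0xbf,
                                0x20, 0x6a, 0x2e, 0x98, 0xe5, 0x76),
        },
        /* two more entries omitted */
};

/* Produces the "ok N - <uuid>" descriptions seen in the TAP output. */
static void uuid_test_to_desc(const struct uuid_test_data *t, char *desc)
{
        strscpy(desc, t->uuid, KUNIT_PARAM_DESC_SIZE);
}

/* Generates uuid_correct_gen_params(), iterating over the array above. */
KUNIT_ARRAY_PARAM(uuid_correct, uuid_correct_data, uuid_test_to_desc);

static void uuid_correct_le(struct kunit *test)
{
        const struct uuid_test_data *data = test->param_value;
        guid_t le;

        /* A failure here prints the "Expected guid_parse(...) == 0" block. */
        KUNIT_ASSERT_EQ_MSG(test, guid_parse(data->uuid, &le), 0,
                            "failed to parse '%s'", data->uuid);
        KUNIT_EXPECT_TRUE(test, guid_equal(&data->le, &le));
}

static struct kunit_case uuid_test_cases[] = {
        KUNIT_CASE_PARAM(uuid_correct_le, uuid_correct_gen_params),
        /* uuid_correct_be and the two "wrong" cases are registered alike */
        {}
};

static struct kunit_suite uuid_test_suite = {
        .name = "uuid",
        .test_cases = uuid_test_cases,
};
kunit_test_suite(uuid_test_suite);

With three UUIDs per parameter array and four such cases, kunit_tool reports
4 tests and 12 test parameters, which is where the difference from the old
per-check total of 18 comes from.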
> >
> > Thanks!
> >
> > It's not your fault, but I think we need to defer this until KUnit gains
> > support for run statistics. My gut tells me that if we allow more and more
> > conversions like this, the point will vanish and nobody will care.
>
> Did the test statistics patch we sent out before meet your expectations?
> https://patchwork.kernel.org/project/linux-kselftest/patch/20201211072319.533803-1-davidgow@google.com/

Let me look at it at some point.

> If so, we can tidy it up and try to push it through straight away, we
> were just waiting for a review from someone who wanted the feature.
>
>
> > I like the code, but I can give my tag only after KUnit prints something
> > like this:
> >
> >  * This is how the current output looks on success:
> >
> >    test_uuid: all 18 tests passed
> >
> >  * And when it fails:
> >
> >    test_uuid: failed 18 out of 18 tests
> >
>
> There are some small restrictions on the exact format KUnit can use
> for this if we want to continue to match the (K)TAP specification
> which is being adopted by kselftest. The patch linked above should
> give something formatted like:
>
> # test_uuid: (0 / 4) tests failed (0 / 12 test parameters)
>
> Would that work for you?

Can you decode it for me, please?

(Given that the above question arose at all, perhaps some rephrasing is
needed. The idea is that the user should have a clear understanding of how
many test cases were run and how many of them finished successfully or
failed. Judging by this thread, I would expect to see the cumulative number
of 18, either as one number or as a sum over the test cases, or whatever
you call them; here I see only 4.)



-- 
With Best Regards,
Andy Shevchenko
