Date:   Thu, 21 Mar 2019 18:30:38 -0700
From:   Brendan Higgins <brendanhiggins@...gle.com>
To:     Frank Rowand <frowand.list@...il.com>
Cc:     Greg KH <gregkh@...uxfoundation.org>,
        Kees Cook <keescook@...gle.com>,
        Luis Chamberlain <mcgrof@...nel.org>, shuah@...nel.org,
        Joel Stanley <joel@....id.au>,
        Michael Ellerman <mpe@...erman.id.au>,
        Joe Perches <joe@...ches.com>, brakmo@...com,
        Steven Rostedt <rostedt@...dmis.org>,
        "Bird, Timothy" <Tim.Bird@...y.com>,
        Kevin Hilman <khilman@...libre.com>,
        Julia Lawall <julia.lawall@...6.fr>,
        linux-kselftest@...r.kernel.org, kunit-dev@...glegroups.com,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Jeff Dike <jdike@...toit.com>,
        Richard Weinberger <richard@....at>,
        linux-um@...ts.infradead.org, Daniel Vetter <daniel@...ll.ch>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Rob Herring <robh@...nel.org>,
        Dan Williams <dan.j.williams@...el.com>,
        linux-nvdimm <linux-nvdimm@...ts.01.org>,
        Kieran Bingham <kieran.bingham@...asonboard.com>,
        Knut Omang <knut.omang@...cle.com>
Subject: Re: [RFC v3 18/19] of: unittest: split out a couple of test cases
 from unittest

On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@...il.com> wrote:
>
> On 2/27/19 7:52 PM, Brendan Higgins wrote:
> > On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@...il.com> wrote:
> >>
> >> On 2/18/19 2:25 PM, Frank Rowand wrote:
> >>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
> >>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@...il.com> wrote:
> >>>>>
> >>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> >>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@...il.com> wrote:
> >>>>>>>
> >>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@...il.com> wrote:
< snip >
> >>
> >> In the base version, the order of execution of the test code requires
> >> bouncing back and forth between the test functions and the coding of
> >> of_test_find_node_by_name_cases[].
> >
> > You shouldn't need to bounce back and forth because the order in which
> > the tests run shouldn't matter.
>
> If one can't guarantee total independence of all of the tests, with no
> side effects, then yes.  But that is not my world.  To make that
> guarantee, I would need to be able to run just a single test in an
> entire test run.
>
> I actually want to make side effects possible.  Whether from other
> tests or from live kernel code that is accessing the live devicetree.
> Any extra stress makes me happier.
>
> I forget the exact term that has been tossed around, but to me the
> devicetree unittests are more like system validation, release tests,
> acceptance tests, and stress tests.  Not unit tests in the philosophy
> of KUnit.

Ah, I understand. I thought that they were actually trying to be unit
tests; that pretty much voids this discussion then. Integration tests
and end-to-end tests are valuable as long as that is actually what you
are trying to do.

>
> I do see the value of pure unit tests, and there are rare times that
> my devicetree use case might be better served by that approach.  But
> if so, it is very easy for me to add a simple pure test when debugging.
> My general use case does not map onto this model.

Why do you think it is rare that you would actually want unit tests?

I mean, if you don't get much code churn, then maybe it's not going to
provide you a ton of value to immediately go and write a bunch of unit
tests right now, but I can't think of a single time where having them has hurt.
Unit tests, from my experience, are usually the easiest tests to
maintain, and the most helpful when I am developing.

Maybe I need to understand your use case better.

>
>
> >>
> >> In the frank version the order of execution of the test code is obvious.
> >
> > So I know we were arguing before over whether order *does* matter in
> > some of the other test cases (none in the example that you or I
> > posted), but wouldn't it be better if the order of execution didn't
> > matter? If you don't allow a user to depend on the execution order of
> > test cases, then arguably those dependencies would never form and
> > the order wouldn't matter.
>
> Reality intrudes.  Order does matter.
>
>
> >>
> >> It is possible that a test function could be left out of
> >> of_test_find_node_by_name_cases[], in error.  This will result in a compile
> >> warning (I think warning instead of error, but I have not verified that)
> >> so it might be caught or it might be overlooked.
> >>
> >> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> >> less.  Less is better.
> >
> > I agree that less is better, but there are different kinds of less to
> > consider. I prefer less logic in a function to fewer lines overall.
> >
> > It seems we are in agreement that test cases should be small and
> > simple, so I won't dwell on that point any longer. I agree that the
>
> As a general guide for simple unit tests, sure.
>
> For my case, no.  Reality intrudes.
>
> KUnit has a nice architectural view of what a unit test should be.

Cool, I am glad you think so! That actually means a lot to me. I was
afraid I wasn't conveying the idea properly and that was the root of
this debate.

>
> The existing devicetree "unittests" are not such unit tests.  They
> simply share the same name.
>
> The devicetree unittests do not fit into a clean:
>   - initialize
>   - do one test
>   - clean up
> model.
>
> Trying to force them into that model will not work.  The initialize
> is not a simple, easy to decompose thing.  And trying to decompose
> it can actually make the code more complex and messier.
>
> Clean up can NOT occur, because part of my test validation is looking
> at the state of the device tree after the tests complete, viewed
> through the /proc/device-tree/ interface.
>

Again, if they are not actually intended to be unit tests, then I
think that is fine.
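
For reference, the shape I have in mind when I say "initialize / do one
test / clean up" is roughly the sketch below. It is illustrative only:
the structure and macro names follow KUnit as I expect it to land (struct
kunit_suite, KUNIT_CASE, kunit_test_suite), the suite and case bodies are
made up, and the exact names in this RFC revision may differ a bit.

#include <kunit/test.h>

/* Runs before every test case in the suite (per-case setup). */
static int of_test_init(struct kunit *test)
{
	return 0;
}

/* Runs after every test case in the suite (per-case cleanup). */
static void of_test_exit(struct kunit *test)
{
}

/* One small, independent test case. */
static void of_test_find_node_by_name_basic(struct kunit *test)
{
	KUNIT_EXPECT_TRUE(test, true);
}

/* The registration array referred to earlier in the thread. */
static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name_basic),
	{}
};

static struct kunit_suite of_test_example_suite = {
	.name = "of-test-example",
	.init = of_test_init,
	.exit = of_test_exit,
	.test_cases = of_test_find_node_by_name_cases,
};
kunit_test_suite(of_test_example_suite);

Every case gets the same init/exit bracketing, and leaving a test
function out of the cases array is the failure mode Frank mentioned
earlier (it only shows up as an unused-function warning).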

< snip >

> > Compare the test cases for adding of_test_dynamic_basic,
> > of_test_dynamic_add_existing_property,
> > of_test_dynamic_modify_existing_property, and
> > of_test_dynamic_modify_non_existent_property to the originals. My
> > version is much longer overall, but I think is still much easier to
> > understand. I can say from when I was trying to split this up in the
> > first place, it was not obvious what properties were expected to be
> > populated as a precondition for a given test case (except the first
> > one of course). Whereas, in my version, it is immediately obvious what
> > the preconditions are for a test case. I think you can apply this same
> > logic to the examples you provided: in the frank version, I don't
> > immediately know if one test case does something that is a
> > precondition for another test case.
>
> Yes, that is a real problem in the current code, but easily fixed
> with comments.

I think it is best when you don't need comments, but in this case, I
think I have to agree with you.
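
To make that concrete, the kind of comment I would find sufficient is
roughly the following. This is purely illustrative: the function names
echo the split I proposed, and the ordering notes are hypothetical.

	/* No preconditions: creates the test node and its first property. */
	of_test_dynamic_basic();

	/* Precondition: the property added by of_test_dynamic_basic() exists. */
	of_test_dynamic_add_existing_property();

	/* Precondition: both properties added by the two cases above exist. */
	of_test_dynamic_modify_existing_property();

	/* Precondition: the property this case modifies must NOT already exist. */
	of_test_dynamic_modify_non_existent_property();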

>
>
> > My version also makes it easier to run a test case entirely by itself,
> > which is really valuable for debugging purposes. A common thing that
> > happens when you have lots of unit tests is that something breaks and
> > lots of tests fail. If the test cases are good, there should be just a
> > couple of test cases (ideally one) that directly assert the violated
> > property; those are the test cases you actually want to focus on, the
> > rest are noise for the purposes of that breakage. In my version, it is
> > much easier to turn off the test cases that you don't care about and
> > then focus in on the ones that exercise the violated property.
> >
> > Now I know that these features, hermeticity especially, but others as
> > well (test suite summary, error on unused test case functions, etc.),
> > are not actually in KUnit as it is under consideration here. Maybe it
> > would be best to save these last two patches (18/19 and 19/19) until I
> > have these other features checked in and reconsider them then?
>
> Thanks for leaving 18/19 and 19/19 off in v4.

Sure, no problem. It was pretty clear that it was a waste of both of
our times to continue discussing those at this juncture. :-)

Do you still want me to try to convert the DT not-exactly-unittest to
KUnit? I would kind of prefer (I don't feel *super* strongly about the
matter) that we don't call it that, since I was intending for it to be
the flagship initial example, but I certainly don't mind trying to clean
this patch up to get it up to snuff. It's really just a question of
whether it is worth it to you.

< snip >

Cheers!
