Message-ID: <97ef912d0f6166d7e881ae9805fa1df82a3bd98d.camel@fi.rohmeurope.com>
Date:   Tue, 24 Mar 2020 09:51:06 +0000
From:   "Vaittinen, Matti" <Matti.Vaittinen@...rohmeurope.com>
To:     "andriy.shevchenko@...ux.intel.com" 
        <andriy.shevchenko@...ux.intel.com>
CC:     "tglx@...utronix.de" <tglx@...utronix.de>,
        "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
        "robh+dt@...nel.org" <robh+dt@...nel.org>,
        "talgi@...lanox.com" <talgi@...lanox.com>,
        "brendanhiggins@...gle.com" <brendanhiggins@...gle.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
        "Gary.Hook@....com" <Gary.Hook@....com>,
        "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "davidgow@...gle.com" <davidgow@...gle.com>,
        "changbin.du@...el.com" <changbin.du@...el.com>,
        "broonie@...nel.org" <broonie@...nel.org>,
        "herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
        "olteanv@...il.com" <olteanv@...il.com>,
        "lgirdwood@...il.com" <lgirdwood@...il.com>,
        "rdunlap@...radead.org" <rdunlap@...radead.org>,
        "yamada.masahiro@...ionext.com" <yamada.masahiro@...ionext.com>,
        "mark.rutland@....com" <mark.rutland@....com>,
        "Mutanen, Mikko" <Mikko.Mutanen@...rohmeurope.com>,
        "bp@...e.de" <bp@...e.de>,
        "mhiramat@...nel.org" <mhiramat@...nel.org>,
        "krzk@...nel.org" <krzk@...nel.org>,
        "mazziesaccount@...il.com" <mazziesaccount@...il.com>,
        "skhan@...uxfoundation.org" <skhan@...uxfoundation.org>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "Laine, Markus" <Markus.Laine@...rohmeurope.com>,
        "vincenzo.frascino@....com" <vincenzo.frascino@....com>,
        "sre@...nel.org" <sre@...nel.org>,
        "ardb@...nel.org" <ardb@...nel.org>,
        "linus.walleij@...aro.org" <linus.walleij@...aro.org>,
        "zaslonko@...ux.ibm.com" <zaslonko@...ux.ibm.com>,
        "uwe@...ine-koenig.org" <uwe@...ine-koenig.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: Re: [PATCH v6 04/10] lib/test_linear_ranges: add a test for the
 'linear_ranges'


On Tue, 2020-03-24 at 11:14 +0200, Andy Shevchenko wrote:
> On Tue, Mar 24, 2020 at 10:29:41AM +0200, Matti Vaittinen wrote:

> > +/* First things first. I deeply dislike unit-tests. I have seen all
> > + * the hell breaking loose when people who think unit tests are "the
> > + * silver bullet" to kill bugs get to decide how a company should
> > + * implement its testing strategy...
> > + *
> > + * Believe me, it may get _really_ ridiculous. It is tempting to
> > + * think that walking through all the possible execution branches
> > + * will nail down 100% of bugs. This may lead to demands for a
> > + * certain % of "test coverage" - measured as line coverage. And
> > + * that is one of the worst things you can do.
> > + *
> > + * Ask people to provide line coverage and they will. I've seen
> > + * clever tools which generate test cases for the existing functions
> > + * - and by default these tools assume the code is correct and just
> > + * generate checks which pass when run against the current
> > + * code-base. Run such a generator and you'll get tests that do not
> > + * verify the code is correct but merely verify that nothing
> > + * changes. The problem is that testing working code is pointless -
> > + * and if the code is not working, your test must not assume it is.
> > + * You won't catch any bugs with such tests. What you can do is
> > + * generate a huge amount of tests, especially if you are asked to
> > + * provide 100% line-coverage x_x. So what do these tests - which
> > + * are not finding any bugs now - actually do?
> > + *
> > + * They add inertia to all future development. I think it was Terry
> > + * Pratchett who wrote of someone having the same impact as thick
> > + * syrup has on a chronometer. An excessive amount of unit-tests has
> > + * this effect on development. If you do find _any_ bug in code in
> > + * such an environment and try fixing it... chances are you also
> > + * need to fix the test cases. On a sunny day you fix one test. But
> > + * I've done refactoring which resulted in 500+ broken tests (which
> > + * had really zero value other than proving to managers that we do
> > + * do "quality")...
> > + *
> > + * That being said - there are situations where UTs can be handy. If
> > + * you have algorithms which take some input and should produce some
> > + * output, then you can implement a few carefully selected, simple
> > + * UT cases which test this. I've previously used this approach for
> > + * example for netlink and device-tree data parsing functions: feed
> > + * some example data to the functions and verify the output is as
> > + * expected. I am not covering all the cases, but I will see that
> > + * the main logic is working.
> > + *
> > + * Here we also do some minor testing. I don't want to go through
> > + * all branches or test more or less obvious things - but I want to
> > + * see that the main logic works. And I definitely don't want to add
> > + * 500+ test cases that break when some simple fix is done x_x. So
> > + * let's only add a few well-selected tests which verify as much of
> > + * the logic as possible.
> 
> And why don't you dare to put this directly into the KUnit
> documentation?

I was going to answer that KUnit is simply not my cup of tea. But
actually, you have a valid point here. If lots of kernel code were to
be polluted by UTs, every developer who wants to change code would need
to suffer from this inertia. And as "every developer" includes me, that
kind of makes it my cup of tea as well.

> I think it's not the place (I mean this file) for discussions like
> that.

OTOH, I trust the maintainers of each area to perform some sanity
checks on tests submitted for their area of code. Hence I don't feel
the need (at least for now) to make any general statement about kernel
testing strategy.

I do, however, feel somewhat responsible for code I am authoring. I
usually try to fix issues reported against it - and at minimum
participate in reviewing such changes. The current linear_ranges code
is authored by me, so I want to fix problems in it if any are found.
This makes the linear_ranges-specific tests special to me. Thus,
concerning the linear_ranges tests, I would like to do my best to
ensure they do not hinder development.

> I have seen cases in my life where tests helped not to break working
> code during endless (micro-)optimizations. We have real examples with
> the bitmap API here, where tests were (and I believe still are)
> helpful.

I am not sure how you read my comment here, but I did say that tests
are good in some places - it's just that where to add tests needs
careful pondering. After all, I did write these tests - I wouldn't have
done so if I saw no value in adding them.

