Message-ID: <1543036529.4680.655.camel@oracle.com>
Date:   Sat, 24 Nov 2018 06:15:29 +0100
From:   Knut Omang <knut.omang@...cle.com>
To:     Brendan Higgins <brendanhiggins@...gle.com>,
        gregkh@...uxfoundation.org, keescook@...gle.com, mcgrof@...nel.org,
        shuah@...nel.org
Cc:     brakmo@...com, richard@....at, dri-devel@...ts.freedesktop.org,
        linux-nvdimm@...ts.01.org, mpe@...erman.id.au, Tim.Bird@...y.com,
        linux-um@...ts.infradead.org, linux-kernel@...r.kernel.org,
        rostedt@...dmis.org, kieran.bingham@...asonboard.com,
        julia.lawall@...6.fr, joel@....id.au,
        linux-kselftest@...r.kernel.org, khilman@...libre.com,
        joe@...ches.com, dan.j.williams@...el.com, jdike@...toit.com,
        kunit-dev@...glegroups.com,
        Hidenori Yamaji <hidenori.yamaji@...y.com>,
        Alan Maguire <alan.maguire@...cle.com>
Subject: Re: [RFC v2 00/14] kunit: introduce KUnit, the Linux kernel unit
 testing framework

On Tue, 2018-10-23 at 16:57 -0700, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;

First thanks to Hidenori Yamaji for making me aware of these threads!

I'd like to kindly remind Brendan, and inform others who might have
missed out on it, about our (somewhat different) approach to this space
at Oracle: KTF (Kernel Test Framework).

KTF is a product of our experience with driver testing within Oracle since
2011, developed as part of a project that was not made public until 2016.
During the project we experimented with multiple approaches to enable more
test-driven work with kernel code; KTF is the "testing within the kernel"
part of this. While we realize there are quite a few testing frameworks out
there, KTF makes it easy to run selected tests directly in kernel context,
and as such provides a valuable approach to unit testing.
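
To give a flavour of what this looks like, here is a rough sketch of a
minimal KTF test module, modelled on the hello world example in the
repository (exact macro names and details are in the docs linked below):

/* Rough sketch of a minimal KTF test module (cf. the hello world
 * example in the KTF repository; details may differ slightly):
 */
#include <linux/module.h>
#include "ktf.h"

MODULE_LICENSE("GPL");

KTF_INIT();

/* A test case in the test set "simple" */
TEST(simple, hello_ok)
{
	EXPECT_TRUE(true);
}

static int __init hello_init(void)
{
	ADD_TEST(hello_ok);	/* register the test with the framework */
	return 0;
}

static void __exit hello_exit(void)
{
	KTF_CLEANUP();		/* unregister tests and clean up */
}

module_init(hello_init);
module_exit(hello_exit);

The tests are then discovered and run from user space with KTF's
googletest based runner (ktfrun), so results are reported in the normal
googletest output format.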

Brendan, I regret you weren't at this year's testing and fuzzing workshop at
LPC last week, where we could have continued our discussions from last year!

I hope we can work on this for a while longer before anything gets merged.
Maybe it can be a topic for a longer session in a future test-related workshop?

Links to more info about KTF:
------
Git repo: https://github.com/oracle/ktf
Formatted docs: http://heim.ifi.uio.no/~knuto/ktf/

LWN mention from my presentation at LPC'17: https://lwn.net/Articles/735034/
Oracle blog post: https://blogs.oracle.com/linux/oracles-new-kernel-test-framework-for-linux-v2
OSS'18 presentation slides: https://events.linuxfoundation.org/wp-content/uploads/2017/12/Test-Driven-Kernel-Development-Knut-Omang-Oracle.pdf

In the documentation (see http://heim.ifi.uio.no/~knuto/ktf/introduction.html)
we present some more motivation for the choices made with KTF.
As described in that introduction, we believe in a more pragmatic approach
to unit testing for the kernel than the classical "mock everything" approach.
The exception is typical, heavily algorithmic components that have interfaces
which are simple to mock, such as container implementations, or components
like page table traversal algorithms or memory allocators, where the benefit
of being able to "listen" on the needed mock interfaces pays off handsomely.

We also used strategies to compile kernel code in user mode,
for the parts of the code where the interfaces seemed easy enough to mock.
I also looked at UML back then, but dismissed it in favor of the
more lightweight approach of just compiling the code under test
directly in user mode, with a minimal, partly hand-crafted, flat mock layer.
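
As a purely hypothetical illustration of what I mean by a flat mock layer
(not code from our project), a couple of kernel APIs can simply be
redefined on top of libc so that the code under test builds unchanged as
a normal user-space program:

/* Hypothetical illustration of a minimal "flat" user-mode mock layer:
 * kernel allocation calls are backed directly by malloc/free so the
 * code under test compiles and runs as an ordinary user-space program.
 */
#include <stdlib.h>
#include <string.h>

typedef unsigned int gfp_t;
#define GFP_KERNEL 0

static inline void *kmalloc(size_t size, gfp_t flags)
{
	(void)flags;		/* allocation flags have no meaning here */
	return malloc(size);
}

static inline void *kzalloc(size_t size, gfp_t flags)
{
	void *p = kmalloc(size, flags);

	if (p)
		memset(p, 0, size);
	return p;
}

static inline void kfree(const void *p)
{
	free((void *)p);
}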

> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.

I am curious: given the intention of only running in user mode anyway,
why not try to build upon Googletest/Googlemock (or a similar unit test
framework for C, if C is desired), instead of "reinventing"
kernel-specific macros for the tests?

> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.

I think there is clearly a trade-off here: tests run in an isolated, mocked
environment depend on fewer external components. But the more complex the
mock environment gets, the more likely it is to itself become a source of
errors, nondeterminism and capacity limits. Also, the mock code will
typically be less well tested than the kernel code it mocks. So mocking is
by no means a silver bullet; precise testing with a real kernel on real
hardware is still often necessary and desired.

If the code under test is fairly standalone and complex enough, building a
mock environment for it and testing it independently may be worth it, but
pragmatically, if the same functionality can relatively easily be exercised
within the kernel, that would be my first choice.

I like to think of all sorts of testing and assertion making as adding more
redundancy. When an error surfaces, you can never be sure whether it is a
problem with the test, the test framework, the environment, or an actual bug
in the code under test, and whichever of these is at fault has to be fixed
before the test can pass.

Thanks,
Knut


