Message-ID: <7fd35df81c06f6eb319223a22e7b93f29926edb9.camel@oracle.com>
Date: Thu, 09 May 2019 13:52:15 +0200
From: Knut Omang <knut.omang@...cle.com>
To: "Theodore Ts'o" <tytso@....edu>,
Frank Rowand <frowand.list@...il.com>
Cc: Greg KH <gregkh@...uxfoundation.org>,
Brendan Higgins <brendanhiggins@...gle.com>,
keescook@...gle.com, kieran.bingham@...asonboard.com,
mcgrof@...nel.org, robh@...nel.org, sboyd@...nel.org,
shuah@...nel.org, devicetree@...r.kernel.org,
dri-devel@...ts.freedesktop.org, kunit-dev@...glegroups.com,
linux-doc@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kbuild@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-nvdimm@...ts.01.org,
linux-um@...ts.infradead.org, Alexander.Levin@...rosoft.com,
Tim.Bird@...y.com, amir73il@...il.com, dan.carpenter@...cle.com,
dan.j.williams@...el.com, daniel@...ll.ch, jdike@...toit.com,
joel@....id.au, julia.lawall@...6.fr, khilman@...libre.com,
logang@...tatee.com, mpe@...erman.id.au, pmladek@...e.com,
richard@....at, rientjes@...gle.com, rostedt@...dmis.org,
wfg@...ux.intel.com
Subject: Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit
testing framework
On Wed, 2019-05-08 at 23:20 -0400, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 07:13:59PM -0700, Frank Rowand wrote:
> > > If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> > > etc. there's nothing stopping you from doing that. But it's not fair
> > > to object to other people who might want to use better tools.
> > >
> > > The reality is that we have a lot of testing tools. It's not just
> > > kselftests. There is xfstests for file system code, blktests for
> > > block layer tests, etc. We use the right tool for the right job.
> >
> > More specious arguments.
>
> Well, *I* don't think they are specious; so I think we're going to
> have to agree to disagree.
Looking at both Frank's and Ted's arguments here, I don't think you
really disagree; I just think you have different classes of tests in mind.
In my view it is useful to think in terms of two main categories of
interesting unit tests for kernel code (using the term "unit test" pragmatically):
1) Tests that exercise typically algorithmic or otherwise intricate, complex
code with relatively few outside dependencies, or where the dependencies
are considered worth mocking, such as the basics of container data
structures or page table code. If I understand you right, Ted, the tests
you refer to in this thread are such tests, and I believe covering this space
is the goal Brendan has in mind for KUnit (a small sketch of what I have in
mind follows just below this list).
2) Tests that exercise the interaction between a module under test and other
parts of the kernel, such as the intricacies of how a driver or file system
interacts with the rest of the kernel, and with hardware, whether that is
real hardware or a model/emulation.
Using your testing needs as an example again, Ted, from my shallow understanding
you have such needs within the context of xfstests (https://github.com/tytso/xfstests).
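To make 1) a bit more concrete, here is a rough sketch of what such a
test could look like with KUnit, as far as I understand the API from
Brendan's patch set (the "mylist" type and its functions are made up
just for illustration, and the exact macro names may differ):

  #include <kunit/test.h>
  #include "mylist.h"     /* hypothetical container under test */

  static void mylist_test_push_pop(struct kunit *test)
  {
          struct mylist list;

          mylist_init(&list);
          mylist_push(&list, 42);
          KUNIT_EXPECT_EQ(test, 1, mylist_len(&list));
          KUNIT_EXPECT_EQ(test, 42, mylist_pop(&list));
          KUNIT_EXPECT_EQ(test, 0, mylist_len(&list));
  }

  static struct kunit_case mylist_test_cases[] = {
          KUNIT_CASE(mylist_test_push_pop),
          {}
  };

  static struct kunit_suite mylist_test_suite = {
          .name = "mylist",
          .test_cases = mylist_test_cases,
  };
  kunit_test_suite(mylist_test_suite);

Small, focused, and with no outside dependencies beyond the code under
test - that is the kind of test I read 1) to be about.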
To 1), I agree with Frank that the problem with using UML is that you still
have to deal with the complexity of a kernel runtime system, while what you
really want for these types of tests is just to compile a couple of kernel
source files in a normal user land context, to allow the use of Valgrind and
other user space tools on the code. The challenge is to get the code compiled
in such an environment, as it usually relies on subtle kernel macros and
definitions, which is why UML seems like such an attractive solution. Like
Frank, I really see no big difference between UML and running inside a
Qemu/KVM process from a testing and debugging perspective, and I think I have
an idea for a better solution:
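To sketch what I mean (all names here are made up): for sufficiently
self-contained code it can be enough to provide a small stub header with
user space stand-ins for the few kernel definitions the code actually
uses, and then pull the kernel source file straight into an ordinary
user space test program that can be run under Valgrind:

  /* kstubs.h: minimal user space stand-ins for the kernel bits
   * the code under test happens to rely on (hypothetical)
   */
  #ifndef KSTUBS_H
  #define KSTUBS_H
  #include <stdlib.h>
  #include <stdio.h>
  #include <assert.h>

  #define GFP_KERNEL 0
  #define kmalloc(size, flags)    malloc(size)
  #define kzalloc(size, flags)    calloc(1, (size))
  #define kfree(p)                free(p)
  #define pr_err(...)             fprintf(stderr, __VA_ARGS__)
  #define BUG_ON(cond)            assert(!(cond))
  #endif

  /* test_myalg.c: plain user space program exercising the code */
  #include "kstubs.h"
  #include "myalg.c"              /* the kernel source file under test */

  int main(void)
  {
          struct myalg_ctx *ctx = myalg_create();  /* hypothetical API */

          myalg_insert(ctx, 1);
          myalg_destroy(ctx);
          return 0;
  }

Built and run with something like
"gcc -g -O0 -o test_myalg test_myalg.c && valgrind ./test_myalg".
The hard part in practice is that real kernel code is rarely this
self-contained, which is where the code generation I mention next comes in.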
In the early phases of the SIF project, which I mention below, I did a lot of
experimentation around this. My biggest challenge then was to test the driver
implementation of the page table handling for an Intel page table compatible
on-device MMU, using a mix of page sizes, but with a few subtle limitations in
the hardware. With some effort on code generation and heavy automated use of
compiler feedback, I was able to do that to great satisfaction; it probably
saved the project a lot of time in debugging, and saved myself a lot of pain :)
To 2), most of the current xfstests (if not all?) are user space tests that do
not use extra test-specific kernel code or test-specific changes to the modules
under test (am I right, Ted?), and I believe that is just as it should be: if
something can be exercised well enough from user space, then that is the easier
approach.
However, sometimes a test cannot easily be written without interacting directly
with internal kernel interfaces, or such interaction would greatly simplify the
test or increase its precision. That need was the initial motivation for us to
create KTF (https://github.com/oracle/ktf, http://heim.ifi.uio.no/~knuto/ktf/index.html),
which we are working on adapting so that it fits naturally, and in the right
way, into a kernel patch set.
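For reference, a minimal kernel side KTF test is just a small module along
these lines (written from memory of our documentation example, so the
details may be slightly off):

  #include <linux/module.h>
  #include "ktf.h"

  KTF_INIT();

  /* A trivial test case; real tests would poke at internal
   * kernel interfaces of the module under test.
   */
  TEST(selftest, simple)
  {
          EXPECT_TRUE(true);
          EXPECT_INT_EQ(42, 42);
  }

  static int __init selftest_init(void)
  {
          ADD_TEST(simple);
          return 0;
  }

  static void __exit selftest_exit(void)
  {
          KTF_CLEANUP();
  }

  module_init(selftest_init);
  module_exit(selftest_exit);
  MODULE_LICENSE("GPL");

The test is then discovered and run from user space by the Googletest
based runner, which also takes care of the reporting.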
We developed the SIF InfiniBand HCA driver
(https://github.com/oracle/linux-uek/tree/uek4/qu7/drivers/infiniband/hw/sif)
and associated user level libraries in what I like to call a "pragmatically
test driven" way. At the end of the project we had quite a few unit tests,
but only a small fraction of them were KTF tests; most of the testing needs
were covered by user land unit tests and higher level application testing.
To you, Frank, and your concern about having to learn yet another tool with
its own syntax: I completely agree. We definitely want to minimize the need
to learn new ways of doing things, which is why I think it is important to
look at the whole unit testing picture as one, and at least make sure it
works in a unified and efficient way, both syntactically and operationally.
With KTF we focus on making kernel tests as similar to, and as integrated
with, user space tests as possible, using similar test macros, and on not
reinventing more wheels than necessary by basing reporting and test execution
on existing user land tools. KTF integrates with Googletest for this
functionality. This also makes the reporting format discussion here irrelevant
for KTF, as KTF supports whatever reporting format the user land tool
supports - Googletest, for instance, naturally supports pluggable reporting
implementations, and there already seems to be a TAP reporting extension out
there (I haven't tried it yet, though).
Using and relating to an existing user land framework allows us to have a
set of tests that work the same way from a user/developer perspective, but
where some of them are kernel-only tests, some are ordinary user land tests
exercising system call boundaries and other kernel interfaces, and some are
what we call "hybrid", where parts of the test run in user mode and parts
in kernel mode.
I hope we can discuss this whole area in more detail, for instance at the
testing and fuzzing workshop at LPC later this year, where I have proposed
a topic for it.
Thanks,
Knut