Message-ID: <ECADFF3FD767C149AD96A924E7EA6EAF977D00A4@USCULXMSG01.am.sony.com>
Date: Thu, 17 Oct 2019 23:40:01 +0000
From: <Tim.Bird@...y.com>
To: <tytso@....edu>
CC: <skhan@...uxfoundation.org>, <brendanhiggins@...gle.com>,
<yzaikin@...gle.com>, <linux-kselftest@...r.kernel.org>,
<linux-ext4@...r.kernel.org>, <adilger.kernel@...ger.ca>,
<kunit-dev@...glegroups.com>
Subject: RE: [PATCH linux-kselftest/test v2] ext4: add kunit test for
decoding extended timestamps
> -----Original Message-----
> From: Theodore Y. Ts'o
>
> On Thu, Oct 17, 2019 at 10:25:35PM +0000, Tim.Bird@...y.com wrote:
> >
> > I'm not sure I have the entire context here, but I think deterministic
> > might not be the right word, or it might not capture the exact meaning
> > intended.
> >
> > I think there are multiple issues here:
> > 1. Does the test enclose all its data, including working data and
> > expected results? Or, does the test allow someone to provide working
> > data? This alternative implies that either some of the testcases or
> > the results might be different depending on the data that is
> > provided. IMHO the test would be deterministic if it always produced
> > the same results based on the same data inputs, and if the input
> > data was deterministic. I would call this a data-driven test.
>
> Do you mean that the tester would provide the test data at the time
> when they launch the test?
No. Well, the data might be provided at some time independent
of the test compilation time, but it would not be made up on the fly.
So the data might be provided at run time, but that shouldn't imply
that the data is random, or that there is some lengthy fabrication
process that happens at test execution time.
> My definition of a unit test is something which runs quickly so you
> can make some quick edits, and then run something like "make
> check", and find out whether or not you've made any screw ups. Or it
> might get used in an automated way, such that immediately after I push
> to a Gerrit server, tests *automatically* get run, and within a minute
> or two, I get notified if there are any test failures, and a -1 review
> is automatically posted to the proposed change in the Web UI.
>
> So this is not the sort of thing where the human will be providing
> working data to change how the test behaves. The idea is that you
> type "make check", and hundreds of tests run, sanity checking basic
> functional soundness of the changes which were just made.
I'm not talking about any kind of system where a human is required
to think something up when the test runs.
>
> Unit tests also tend to run on small pieces of code --- for example,
> in how we encode 64-bit timestamps in ext4, to provide both 100ns
> granularity timestamps as well as post-2038 encoding, where the "low"
> 32-bits are backwards compatible with traditional 32-bit time_t's. It
> turns out doing this is complicated, and it's easy to screw it up (and in
> fact, we have). So now we have a set of unit tests for it.
>
> The unit tests use a lot of reusable infrastructure, so that while
> there are a series of tests to test each of the various different
> boundary conditions, they are *driven* by test data. So for example,
> this is a test...
>
> 	{
> 		.test_case_name =
> 			"2174-02-25 Lower bound of 32bit <0 timestamp, hi extra sec bit on.",
> 		.msb_set = true,
> 		.lower_bound = true,
> 		.extra_bits = 2,
> 		.expected = {.tv_sec = 0x180000000LL, .tv_nsec = 0L},
> 	},
>
>
> ... and this is a different test....
>
> 	{
> 		.test_case_name =
> 			"2310-04-04 Upper bound of 32bit>=0 timestamp, hi extra sec bit 1. 1 ns bit 1.",
> 		.msb_set = false,
> 		.lower_bound = false,
> 		.extra_bits = 6,
> 		.expected = {.tv_sec = 0x27fffffffLL, .tv_nsec = 1L},
> 	},
>
> So yes, it's very data-driven. *But* all of the code, data, and
> expected results are all encapsulated in fs/ext4/inode-test.c.
Agreed. By data-driven, I should have more specifically said
something like external-data driven.
>
> So when you say, "does the test allow someone to provide working
> data", it's unclear what you mean by that.
I mean some piece of data that allows the test to test more conditions.
More specifically, I mean data that already exists at test execution
time.
> How do you expect a user
> to provide the working data? And how do you make the test easy to
> run?
In this case, the data is encoded in C structures and requires re-compiling
the test when a new data point is added. One could envision reading
the list of data points from a file, and processing them.
Each new data point becomes a new test case. This allows someone
to add a data point - for example maybe Iurii missed an interesting one,
or the algorithm changed and a new test case becomes worth testing.
In this case, the cost of parsing the data file does add some overhead,
but it's not onerous. I'm not sure how, or whether, kunit handles
the issue of reading data from a file at test time. But it doesn't have
to be a file read. I'm just talking about separating data from code.
This might not be the best example, because the C code for expressing
the data points is pretty simple, so someone can add another data
point pretty easily. What cannot be done with the data points embedded
in the code, however, is sending a data file indicating a new problem
data point, without other people having to re-compile the code to
check it out on their system.
>
> > 2. Does the test automatically detect some attribute of the system,
> > and adjust its operation based on that (does the test probe?) This
> > is actually quite common if you include things like when a test
> > requires root access to run. Sometimes such tests, when run without
> > root privilege, run as many testcases as possible not as root, and
> > skip the testcases that require root.
>
> And if the test needs to do this, it's *automatically* not a unit
> test.
Not necessarily. Maybe the root privilege example is not a good one.
How about a test that probes the kernel config, and executes
some variation of the tests based on the config values it detects?
This has nothing to do with the scope of the test or the speed of
the test.
> It might be an integration test. xfstests fall very squarely
> into that category. Integration tests are heavier-weight.
> You're not going to run them as lightly as you run unit tests, because
> they take more resources, and more wall clock time before you get
> results.
>
> One of the main point of unit tests is that they are testing units of
> code smaller than "the whole kernel". When I'm testing how 64-bit
> timestamps are encoded, "root" has no meaning, because the scope of
> the test doesn't even really include the concept of a process, or user
> privileges.
Agreed.
>
> One of the other advantages of unit tests is that sometimes it's much
> easier to test the corner cases of some internal abstraction because
> you can have test mocks which can simulate conditions which would
> be very hard to simulate if you are running userspace code which can
> talk to the kernel only via the system call interface.
Agreed.
>
> > The reason I want get clarity on the issue of data-driven tests is
> > that I think data-driven tests and tests that probe are very much
> desirable. This allows a test to be more generalized and
> > allows for specialization of the test for more scenarios without
> > re-coding it. I'm not sure if this still qualifies as unit testing,
> > but it's very useful as a means to extend the value of a test. We
> > haven't trod into the mocking parts of kunit, but I'm hoping that it
> > may be possible to have that be data-driven (depending on what's
> > being mocked), to make it easier to test more things using the same
> > code.
>
> So does this distinction help?
>
> If you need to do things which involve system calls and whether the
> system calls are run as root, it seems to me that the existing
> kselftest infrastructure is the right thing to do. If you are
> interested in testing code below (and in some cases *well* below) the
> system call interface, then kunit is the right approach.
>
> I'd encourage you to take a look at Iurii's fs/ext4/inode-test.c. As
> you can see, it is very data-driven. But it is also, at the same
> time, very self-contained, and which can be easily run, and run very
> quickly.
I'm familiar with the test. I provided input to it. In using it as an example
I am not advocating that it is a good candidate for being data driven (with
data external to the test program).
But other tests - particularly ones where hardware needs to be
mocked - are.
-- Tim