Message-ID: <ECADFF3FD767C149AD96A924E7EA6EAF977D01DC@USCULXMSG01.am.sony.com>
Date: Fri, 18 Oct 2019 02:40:50 +0000
From: <Tim.Bird@...y.com>
To: <tytso@....edu>
CC: <skhan@...uxfoundation.org>, <brendanhiggins@...gle.com>,
<yzaikin@...gle.com>, <linux-kselftest@...r.kernel.org>,
<linux-ext4@...r.kernel.org>, <adilger.kernel@...ger.ca>,
<kunit-dev@...glegroups.com>
Subject: RE: [PATCH linux-kselftest/test v2] ext4: add kunit test for
decoding extended timestamps

> -----Original Message-----
> From: Theodore Y. Ts'o
>
> On Thu, Oct 17, 2019 at 11:40:01PM +0000, Tim.Bird@...y.com wrote:
> >
> > No. Well, the data might be provided at some time independent
> > of the test compilation time, but it would not be made up on the fly.
> > So the data might be provided at run time, but that shouldn't imply
> > that the data is random, or that there is some lengthy fabrication
> > process that happens at test execution time.
>
> So how would the data be provided? Via a mounted file system? There
> is no mounted file system when we're running a kunit test. One of the
> reasons why kunit is fast is because we're not running init scripts,
> and we're not mounting a file system.
>
> The fabrication process isn't really lengthy, though. If I modify
> fs/ext4/inode-test.c to add or remove a test, it takes:
>
> Elapsed time: 2.672s total, 0.001s configuring, 2.554s building, 0.116s running
>
> Compare and contrast this with running "kvm-xfstests -c 4k generic/001"
>
> The actual time to run the test generic/001 is 3 seconds. But there
> is a 9 second overhead in starting the VM, for a total test time of 12
> seconds. So sure, with kvm-xfstests I can drop a config file in
> /tmp/kvm-xfstests-tytso, which is mounted as /vtmp using 9p, so you
> could provide "user provided data" via a text file. But the overhead
> of starting up a full KVM, mounting a file system, starting userspace,
> etc., is 9 seconds.
>
> Contrast this with 2.5 seconds to recompile and relink
> fs/ext4/inode-test.c into the kunit library. I wouldn't call that a
> "length fabrication process".

I'm not sure I understand your point here at all. I never said that
compiling the code was a lengthy fabrication process. I said that I was
NOT envisioning a lengthy fabrication process at runtime for the creation
of the external data. Indeed, I wasn't envisioning fabricating the data
at test runtime at all. I was trying to clarify that I didn't envision a
human or a fuzzer in the loop at test runtime, but apparently I didn't
make that clear. That clarification was based on an assumption about what
you were asking in your question; maybe that assumption was wrong.
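
To make the distinction concrete, the kind of data-driven test I have in
mind is nothing more exotic than keeping the test vectors in a table that
the test logic walks over. A minimal sketch of that shape (decode_sample()
and the values in the table are hypothetical placeholders, not the real
ext4 helper or its test data):

#include <kunit/test.h>
#include <linux/kernel.h>

/* Hypothetical stand-in for the function under test. */
static long long decode_sample(u32 extra)
{
        return (long long)(extra & 0x3) << 32;
}

/*
 * The "data" half: a table of cases. It could just as well be
 * generated from an external text file at build time; nothing is
 * fabricated at test runtime.
 */
static const struct {
        const char *desc;
        u32 extra;
        long long expected;
} sample_cases[] = {
        { "no extra bits", 0x0, 0 },
        { "one extra bit", 0x1, 0x100000000LL },
};

/* The "code" half: one loop over the table. */
static void sample_decode_test(struct kunit *test)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(sample_cases); i++)
                KUNIT_EXPECT_EQ_MSG(test,
                                    decode_sample(sample_cases[i].extra),
                                    sample_cases[i].expected,
                                    "%s", sample_cases[i].desc);
}

static struct kunit_case sample_test_cases[] = {
        KUNIT_CASE(sample_decode_test),
        {}
};

static struct kunit_suite sample_test_suite = {
        .name = "sample-decode",
        .test_cases = sample_test_cases,
};
kunit_test_suites(&sample_test_suite);
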
> Is it really worth it to add in some
> super-complex way to feed a data text file into a Kunit test, when
> editing the test file and rerunning the test really doesn't take that
> long?
>
> > In this case, the cost of parsing the data file does add some overhead,
> > but it's not onerous. I'm not sure how, or whether, kunit handles
> > the issue of reading data from a file at test time. But it doesn't have
> > to be a file read. I'm just talking separating data from code.
>
> It's not the cost of parsing the data file, it's how to *feed* the data
> file into the test file. How exactly are we supposed to do it? 9p?
> Some other mounted file system? That's where all the complexity and
> overhead is going to be.
>
> > Not necessarily. Maybe the root privilege example is not a good one.
> > How about a test that probes the kernel config, and executes
> > some variation of the tests based on the config values it detects?
>
> But that's even easier. We can put "#ifdef CONFIG_xxx" into the
> fs/ext4/inode-test.c file. Again, it doesn't take that long to
> recompile and relink the test .c file.
>
> Apologies, but this really seems like complexity in search of a
> problem....

We're just talking past each other. My original e-mail was a rebuttal
to your assertion that any test that is data-driven or non-deterministic
is a fuzzer. I still believe that's just not the case, and that this is
independent of the mechanics or speed of how the data is supplied.

I also conceded (multiple times) that externally data-driven techniques
are probably more aptly applied to non-unit tests. I've heard your pitch
about speed, and I'm sympathetic. My point is simply that I believe there
is a place for data-driven tests. I can live with having failed to
convince you, which I'll put down to my own inability to communicate my
ideas clearly. :-)
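
As an aside, the compile-time config variation discussed above really is
simple enough to express; a sketch of that shape (the CONFIG option and
the case bodies here are just illustrative placeholders):

#include <kunit/test.h>

/* Always-built case; the assertion is a placeholder. */
static void base_case_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 1 + 1, 2);
}

#ifdef CONFIG_EXT4_FS_SECURITY
/* Variant that is only compiled in when the option is set. */
static void security_variant_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}
#endif

static struct kunit_case config_variant_cases[] = {
        KUNIT_CASE(base_case_test),
#ifdef CONFIG_EXT4_FS_SECURITY
        KUNIT_CASE(security_variant_test),
#endif
        {}
};

static struct kunit_suite config_variant_suite = {
        .name = "config-variant-example",
        .test_cases = config_variant_cases,
};
kunit_test_suites(&config_variant_suite);
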
Regards,
-- Tim