Date:   Thu, 17 Oct 2019 21:40:27 -0400
From:   "Theodore Y. Ts'o" <tytso@....edu>
To:     Tim.Bird@...y.com
Cc:     skhan@...uxfoundation.org, brendanhiggins@...gle.com,
        yzaikin@...gle.com, linux-kselftest@...r.kernel.org,
        linux-ext4@...r.kernel.org, adilger.kernel@...ger.ca,
        kunit-dev@...glegroups.com
Subject: Re: [PATCH linux-kselftest/test v2] ext4: add kunit test for
 decoding extended timestamps

On Thu, Oct 17, 2019 at 11:40:01PM +0000, Tim.Bird@...y.com wrote:
> 
> No. Well, the data might be provided at some time independent
> of the test compilation time, but it would not be made up on the fly.
> So the data might be provided at run time, but that shouldn't imply
> that the data is random, or that there is some lengthy fabrication
> process that happens at test execution time.

So how would the data be provided?  Via a mounted file system?  There
is no mounted file system when we're running a kunit test.  One of the
reasons why kunit is fast is because we're not running init scripts,
and we're not mounting a file system.

The fabrication process isn't really lengthy, though.  If I modify
fs/ext4/inode-test.c to add or remove a test, it takes:

Elapsed time: 2.672s total, 0.001s configuring, 2.554s building, 0.116s running
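
(That timing line is the output of the kunit.py wrapper under
tools/testing/kunit, which drives the configure/build/run cycle.)

To make "add or remove a test" concrete: a new case is one more
function plus one more KUNIT_CASE() entry in fs/ext4/inode-test.c.
The sketch below is illustrative only -- the function name, suite
name, and the check itself are made up -- but it shows roughly how
small the edit is:

#include <kunit/test.h>
#include <linux/types.h>

/* Illustrative only; the real cases decode on-disk timestamp fields. */
static void ext4_example_timestamp_test(struct kunit *test)
{
	/*
	 * The low two bits of the on-disk "extra" timestamp field
	 * extend the 32-bit seconds counter, so extra epoch bits of
	 * 0x3 add 0x300000000 seconds to the base value.
	 */
	u64 epoch_bits = 0x3;
	u64 seconds = epoch_bits << 32;

	KUNIT_EXPECT_EQ(test, seconds, 0x300000000ULL);
}

static struct kunit_case ext4_inode_test_cases[] = {
	KUNIT_CASE(ext4_example_timestamp_test),
	{}
};

static struct kunit_suite ext4_inode_test_suite = {
	.name = "ext4_inode_test",
	.test_cases = ext4_inode_test_cases,
};

kunit_test_suites(&ext4_inode_test_suite);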

Compare and contrast this with running "kvm-xfstests -c 4k generic/001".

The actual time to run the test generic/001 is 3 seconds.  But there
is a 9 second overhead in starting the VM, for a total test time of 12
seconds.  So sure, with kvm-xfstests I can drop a config file in
/tmp/kvm-xfstests-tytso, which is mounted as /vtmp using 9p, so you
could provide "user provided data" via a text file.  But the overhead
of starting up a full KVM, mounting a file system, starting userspace,
etc., is 9 seconds.
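
To spell that flow out (the file name below is made up; the command
is the same one quoted above):

# On the host: anything dropped into the 9p-exported directory shows
# up inside the guest as /vtmp.
echo "user-provided test data" > /tmp/kvm-xfstests-tytso/testdata.txt
kvm-xfstests -c 4k generic/001
# ... and the test running inside the VM can read /vtmp/testdata.txt.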

Contrast this with 2.5 seconds to recompile and relink
fs/ext4/inode-test.c into the kunit library.  I wouldn't call that a
"length fabrication process".  Is it really worth it to add in some
super-complex way to feed a data text file into a Kunit test, when
editing the test file and rerunning the test really doesn't take that
long?

> In this case, the cost of parsing the data file does add some overhead,
> but it's not onerous.  I'm not sure how, or whether, kunit handles
> the issue of reading data from a file at test time.  But it doesn't have
> to be a file read.  I'm just talking separating data from code.

It's not the cost of parsing the data file; it's how to *feed* the
data file into the test in the first place.  How exactly are we
supposed to do it?  9p?
Some other mounted file system?  That's where all the complexity and
overhead is going to be.

> Not necessarily.  Maybe the root privilege example is not a good one.
> How about a test that probes the kernel config, and executes
> some variation of the tests based on the config values it detects?

But that's even easier.  We can put "#ifdef CONFIG_xxx" into the
fs/ext4/inode-test.c file.  Again, it doesn't take that long to
recompile and relink the test .c file.
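
To sketch what I mean (CONFIG_EXT4_FS_POSIX_ACL is just an example
symbol here, and example_operation() is a made-up placeholder for
whatever the test actually exercises):

#include <kunit/test.h>
#include <linux/errno.h>

/* Made-up placeholder for the code path under test. */
static int example_operation(void)
{
	return IS_ENABLED(CONFIG_EXT4_FS_POSIX_ACL) ? 0 : -EOPNOTSUPP;
}

static void ext4_config_dependent_test(struct kunit *test)
{
#ifdef CONFIG_EXT4_FS_POSIX_ACL
	/* With the feature built in, expect the operation to succeed. */
	KUNIT_EXPECT_EQ(test, example_operation(), 0);
#else
	/* With the feature compiled out, expect "not supported". */
	KUNIT_EXPECT_EQ(test, example_operation(), -EOPNOTSUPP);
#endif
}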

Apologies, but this really seems like complexity in search of a
problem....

						- Ted
