Message-ID: <CAFd5g45fSdpytudDyD3Yo1ti=kU_JJ6S9yz53_L=pnZTjQFU9A@mail.gmail.com>
Date:   Tue, 25 Jun 2019 23:41:47 -0700
From:   Brendan Higgins <brendanhiggins@...gle.com>
To:     Luis Chamberlain <mcgrof@...nel.org>
Cc:     Stephen Boyd <sboyd@...nel.org>,
        Frank Rowand <frowand.list@...il.com>,
        Greg KH <gregkh@...uxfoundation.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Kees Cook <keescook@...gle.com>,
        Kieran Bingham <kieran.bingham@...asonboard.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Rob Herring <robh@...nel.org>, shuah <shuah@...nel.org>,
        "Theodore Ts'o" <tytso@....edu>,
        Masahiro Yamada <yamada.masahiro@...ionext.com>,
        devicetree <devicetree@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        kunit-dev@...glegroups.com,
        "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
        linux-fsdevel@...r.kernel.org,
        linux-kbuild <linux-kbuild@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>,
        linux-nvdimm <linux-nvdimm@...ts.01.org>,
        linux-um@...ts.infradead.org,
        Sasha Levin <Alexander.Levin@...rosoft.com>,
        "Bird, Timothy" <Tim.Bird@...y.com>,
        Amir Goldstein <amir73il@...il.com>,
        Dan Carpenter <dan.carpenter@...cle.com>,
        Daniel Vetter <daniel@...ll.ch>, Jeff Dike <jdike@...toit.com>,
        Joel Stanley <joel@....id.au>,
        Julia Lawall <julia.lawall@...6.fr>,
        Kevin Hilman <khilman@...libre.com>,
        Knut Omang <knut.omang@...cle.com>,
        Logan Gunthorpe <logang@...tatee.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Petr Mladek <pmladek@...e.com>,
        Randy Dunlap <rdunlap@...radead.org>,
        Richard Weinberger <richard@....at>,
        David Rientjes <rientjes@...gle.com>,
        Steven Rostedt <rostedt@...dmis.org>, wfg@...ux.intel.com
Subject: Re: [PATCH v5 01/18] kunit: test: add KUnit test runner core

On Tue, Jun 25, 2019 at 4:02 PM Luis Chamberlain <mcgrof@...nel.org> wrote:
>
> On Tue, Jun 25, 2019 at 03:14:45PM -0700, Brendan Higgins wrote:
> > On Tue, Jun 25, 2019 at 2:44 PM Luis Chamberlain <mcgrof@...nel.org> wrote:
> > > Since it's a new architecture and since you seem to imply most tests
> > > don't require locking or even IRQs disabled, I think it's worth
> > > considering the impact of adding such extreme locking requirements
> > > for an initial ramp up.
> >
> > Fair enough, I can see the point of not wanting to disable IRQs
> > until someone complains about it, but I think making it thread safe
> > is reasonable. It means there is one less thing to confuse a KUnit
> > user, and the only penalty paid is a very minor performance cost.
>
> One reason I'm really excited about kunit is speed... so by all means I
> think we're at a good point to analyze performance optimizations if
> they do make sense.

Yeah, but I think there is much lower-hanging fruit than this (as you
point out below). I am all for making/keeping KUnit super fast, but I
also don't want to waste time on premature optimizations, and I think
having both thread-safe and non-thread-safe expectations hurts
usability.

Still, I am on board with making this a mutex instead of a spinlock for now.
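
To make that concrete, here is a minimal sketch of the sort of change I
have in mind: the test's pass/fail state guarded by a mutex rather than
a spinlock. The struct and function names below are illustrative only,
not the actual KUnit API:

/*
 * Illustrative only -- not the real KUnit structures. A mutex
 * protecting the test's failure flag, along the lines discussed above.
 */
#include <linux/mutex.h>

struct example_test {
	bool success;
	struct mutex lock;	/* guards success */
};

static void example_test_init(struct example_test *test)
{
	test->success = true;
	mutex_init(&test->lock);
}

/* Called by an expectation when a check fails. */
static void example_test_set_failure(struct example_test *test)
{
	mutex_lock(&test->lock);
	test->success = false;
	mutex_unlock(&test->lock);
}

Since failures are only ever recorded from process context here, a
mutex is sufficient, and expectations no longer have to pay for an
IRQ-disabled critical section.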

> While on the topic of parallelization, what about support for running
> different test cases in parallel? Or at the very least different kunit
> modules in parallel.  A few questions come up based on this prospect:
>
>   * Why not support parallelism from the start?

Just because it is more work and there isn't much to gain from it right now.

Some numbers:
I currently have a collection of 86 test cases in the branch that this
patchset is from. I turned on PRINTK_TIME and looked at the first
KUnit output and the last. On UML, start time was 0.090000, and end
time was 0.090000. Looks like sched_clock is not very good on UML.

Still, it seems quite likely that all of these tests run in around 0.01
seconds or less on UML: I ran KUnit with only 2 test cases enabled
three times and got an average runtime of 1.55867 seconds with a
standard deviation of 0.0346747. I then ran it another three times
with all test cases enabled and got an average runtime of 1.535
seconds with a standard deviation of 0.0150997. The second average is
less, but that doesn't really mean anything because it is well within
one standard deviation with a very small sample size. Nevertheless, we
can conclude that the actual runtime of the 84 additional test cases
is most likely within one standard deviation, so on the order of 0.01
seconds.

On x86 running on QEMU, first message from KUnit was printed at
0.194251 and the last KUnit message was printed at 0.340915, meaning
that all 86 test cases ran in about 0.146664 seconds.

In any case, running KUnit tests in parallel is definitely something I
plan on adding eventually, but it just doesn't really seem worth it
right now. I find the incremental build time of the kernel to
typically be between 3 and 30 seconds, and a clean build to be between
30 seconds and several minutes, depending on the number of available
cores, so I don't think most users would even notice the amount of
runtime contributed by the actual unit tests until we start getting
into the 1000s of test cases. I don't suspect it will become an issue
until we get into the 10,000s of test cases. I think we are a pretty
long way off from that.

>   * Are you opposed to eventually having this added? For instance, there is
>     enough code in lib/test_kmod.c for batching tons of kthreads, each
>     one running its own thing for testing purposes, which could be used
>     as a template.

I am not opposed to adding it eventually at all. I actually plan on
doing so, just not in this patchset. There are a lot of additional
features, improvements, and sugar that I really want to add, so much
so that most of it doesn't belong in this patchset; I just think this
is one of those things that belongs in a follow-up. I tried to boil
this patchset down to something as small as I could while still being
useful; this is basically an MVP. Maybe after this patchset gets
merged I should post a list of things I have ready for review, or
would like to work on, and people can comment on what things they
want to see next.
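
Just to show the rough shape of what adapting the lib/test_kmod.c-style
kthread batching could look like when it does happen, here is a
hypothetical sketch (none of these names exist in the patchset): each
test case runs on its own kthread and the runner waits for all of them
to complete.

#include <linux/kthread.h>
#include <linux/completion.h>

struct example_case {
	void (*run)(void);
	struct completion done;
};

static int example_case_thread(void *data)
{
	struct example_case *tc = data;

	tc->run();
	complete(&tc->done);
	return 0;
}

static void example_run_parallel(struct example_case *cases, int n)
{
	int i;

	/* Error handling for kthread_run() is omitted for brevity. */
	for (i = 0; i < n; i++) {
		init_completion(&cases[i].done);
		kthread_run(example_case_thread, &cases[i],
			    "example-case/%d", i);
	}

	/* Wait for every case to finish before reporting anything. */
	for (i = 0; i < n; i++)
		wait_for_completion(&cases[i].done);
}

The dispatch loop itself is small; the hard part is result collection
and keeping the log readable, which is what the questions below get at.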

>   * If we eventually *did* support it:
>     - Would logs be skewed?

Probably. Before I went with the TAP approach, I tagged each message
with the test case it came from, so I could have parsed the output and
assembled a coherent view of the logs from that; now that I am using
TAP-conforming output, that won't work. I haven't really thought too
hard about how to address it, but there are ways. For UML users, I am
planning to add a feature that guarantees hermeticity between tests
running in different modules: a single kernel is built with all tests
included, and which tests actually run is determined by command line
arguments or something similar. This way you get the isolation of
running tests in separate environments without increasing the build
cost. We could also use this method to achieve parallelism by
dispatching multiple kernels at once. That only works for UML, but I
imagine you could do something similar for users running tests under
QEMU.
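
As a very rough illustration of the command-line selection part (the
parameter and function names here are made up, not part of the
patchset), the runner could consult a filter string before each suite:

#include <linux/moduleparam.h>
#include <linux/string.h>

static char *example_filter;
module_param_named(filter, example_filter, charp, 0444);

/* Run a suite only if it matches the filter given on the command line. */
static bool example_suite_selected(const char *suite_name)
{
	/* No filter given: run everything. */
	if (!example_filter || !*example_filter)
		return true;
	return strcmp(example_filter, suite_name) == 0;
}

With a selection mechanism like that, dispatching several kernels at
once, each with a different filter, is what would give the parallelism
mentioned above.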

>     - Could we have a way to query: give me log for only kunit module
>       named "foo"?

Yeah, I think that would make sense as part of the hermeticity thing I
mentioned above.

Hope that seems reasonable!
