Date:   Mon, 27 Apr 2020 09:49:31 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Marco Elver <elver@...gle.com>
Cc:     kunit-dev@...glegroups.com,
        Brendan Higgins <brendanhiggins@...gle.com>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Alexander Potapenko <glider@...gle.com>,
        Andrey Konovalov <andreyknvl@...gle.com>,
        kasan-dev <kasan-dev@...glegroups.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] kcsan: Add test suite

On Mon, Apr 27, 2020 at 06:43:21PM +0200, Marco Elver wrote:
> On Mon, 27 Apr 2020 at 17:37, Paul E. McKenney <paulmck@...nel.org> wrote:
> >
> > On Mon, Apr 27, 2020 at 05:23:23PM +0200, Marco Elver wrote:
> > > On Mon, 27 Apr 2020 at 16:35, Marco Elver <elver@...gle.com> wrote:
> > > >
> > > > This adds a KCSAN test focusing on the behaviour of the integrated
> > > > runtime. It tests various race scenarios and verifies the reports
> > > > generated to the console. It makes use of KUnit for test
> > > > organization, and of the Torture framework for test thread control.
> > > >
> > > > Signed-off-by: Marco Elver <elver@...gle.com>
> > > > ---
> > >
> > > +KUnit devs
> > > We had some discussions on how best to test sanitizer runtimes, and
> > > we believe that this test is roughly what testing sanitizer runtimes
> > > should look like. Note that KCSAN involves various additional
> > > complexities, such as multiple threads, and that report generation
> > > isn't entirely deterministic (we need to run some number of
> > > iterations to get reports, may get multiple reports, etc.).
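> > >
> > > Roughly, the thread-control side looks like this (simplified sketch;
> > > names are illustrative, not the exact test code):
> > >
> > >   #include <linux/torture.h>
> > >
> > >   /* One racing accessor; the test spawns several of these. */
> > >   static int access_thread(void *arg)
> > >   {
> > >           do {
> > >                   /* racing read/write of a shared variable */
> > >           } while (!torture_must_stop());
> > >           torture_kthread_stopping("access_thread");
> > >           return 0;
> > >   }
> > >
> > >   /* Threads are spawned with torture_create_kthread(); the test
> > >    * case then iterates until a report shows up (or gives up). */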
> > >
> > > The main thing, however, is that we want to verify the actual output
> > > (or absence of it) to console. This is what the KCSAN test does using
> > > the 'console' tracepoint. Could KUnit provide some generic
> > > infrastructure to check console output, as is done in the test here?
> > > Right now I couldn't say what the most useful generalization of this
> > > would be (without it just being a wrapper around the console
> > > tracepoint), because the way I've decided to capture and then match
> > > console output is quite test-specific. For now we can replicate this
> > > logic on a per-test basis, but it would be extremely useful if KUnit
> > > could provide a generic interface in the future.
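> > >
> > > For reference, the capture itself is roughly (simplified sketch;
> > > buffer size and naming illustrative):
> > >
> > >   #include <linux/spinlock.h>
> > >   #include <linux/string.h>
> > >   #include <trace/events/printk.h>
> > >
> > >   static struct {
> > >           spinlock_t lock;
> > >           char buf[1024];
> > >           size_t len;
> > >   } observed = { .lock = __SPIN_LOCK_UNLOCKED(observed.lock) };
> > >
> > >   /* Probe called for every line printed to the console. */
> > >   static void probe_console(void *ignore, const char *buf, size_t len)
> > >   {
> > >           unsigned long flags;
> > >
> > >           spin_lock_irqsave(&observed.lock, flags);
> > >           if (observed.len + len < sizeof(observed.buf)) {
> > >                   memcpy(observed.buf + observed.len, buf, len);
> > >                   observed.len += len;
> > >           }
> > >           spin_unlock_irqrestore(&observed.lock, flags);
> > >   }
> > >
> > >   /* register_trace_console(probe_console, NULL) before the run,
> > >    * unregister_trace_console(probe_console, NULL) after. */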
> > >
> > > Thoughts?
> >
> > What I do in rcutorture is to run in a VM, dump the console output
> > to a file, then parse that output after the run completes.  For example,
> > the admittedly crude script here:
> >
> >         tools/testing/selftests/rcutorture/bin/parse-console.sh
> 
> That was on the table at one point, but was discarded. When I started
> this, we debated whether I should do module + script, or everything as
> one module. Here is some of the reasoning we went through, just for
> the record:
> 
> We wanted to use KUnit, to be able to benefit from all the
> infrastructure it provides. Wanting to use KUnit meant that we could
> not have a 2-step test (module + script), because KUnit immediately
> prints success/fail after each test case and doesn't run any external
> scripts (AFAIK). There are several benefits to relying on KUnit, such as:
> 1. Common way to set up and run test cases. No need to roll our own
> (a bare-bones sketch follows this list).
> 2. KUnit has a standardized way to assert, report test status,
> success, etc., which can be parsed by CI systems
> (https://testanything.org).
> 3. There are plans to set up KUnit CI systems that just load and run
> all existing KUnit tests on boot. The sanitizer tests can become part
> of these automated test runs.
> 4. If KUnit eventually has a way to check output to console, our
> sanitizer tests will be simplified even further.
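>
> For anyone unfamiliar, a bare-bones KUnit suite looks roughly like
> this (sketch; the real test and case names differ, and
> report_matches() is sketched further below):
>
>   #include <kunit/test.h>
>
>   static void race_test(struct kunit *test)
>   {
>           /* spawn threads, run iterations, then check the output: */
>           KUNIT_EXPECT_TRUE(test, report_matches("expected report"));
>   }
>
>   static struct kunit_case kcsan_test_cases[] = {
>           KUNIT_CASE(race_test),
>           {}
>   };
>
>   static struct kunit_suite kcsan_test_suite = {
>           .name = "kcsan",
>           .test_cases = kcsan_test_cases,
>   };
>   kunit_test_suite(kcsan_test_suite);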
> 
> The other argument is that doing module + script is probably more complex:
> 1. The test would have to explicitly delimit test cases in a custom
> way, which a script could then extract.
> 2. We would need to print the function names, plus the sizes and
> addresses of the variables used in the races, have the script parse
> them, and finally match the access information.
> 3. Re-running the test without shutting down the system would require
> clearing the kernel log or some other way to delimit tests.
> 
> We'd still need the same logic, one way or another, to check what was
> printed to the console. In the end, I came to the conclusion that it's
> significantly simpler to have everything integrated in the module:
> 1. No need to delimit test cases and parse based on delimiters. Just
> check what the console tracepoint last captured (sketch below).
> 2. We can refer to the functions and variables directly; there is no
> need to parse anything.
> 3. Re-running the test works out of the box.
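>
> Concretely, the check boils down to something like this (sketch,
> assuming the probe_console() buffer from above; the real matching
> is more involved):
>
>   /* True iff the expected substring appeared in captured output. */
>   static bool report_matches(const char *expect)
>   {
>           unsigned long flags;
>           bool ret;
>
>           spin_lock_irqsave(&observed.lock, flags);
>           ret = strnstr(observed.buf, expect, observed.len) != NULL;
>           spin_unlock_irqrestore(&observed.lock, flags);
>           return ret;
>   }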
> 
> Therefore, for the sanitizers this is hopefully the best approach.

Fair enough!

Perhaps I should look into KUnit.  I don't recommend holding your breath
waiting, though, inertia being what it is.  ;-)

							Thanx, Paul
