Message-Id: <20190626034100.B238520883@mail.kernel.org>
Date:   Tue, 25 Jun 2019 20:40:59 -0700
From:   Stephen Boyd <sboyd@...nel.org>
To:     Brendan Higgins <brendanhiggins@...gle.com>
Cc:     Frank Rowand <frowand.list@...il.com>,
        Greg KH <gregkh@...uxfoundation.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Kees Cook <keescook@...gle.com>,
        Kieran Bingham <kieran.bingham@...asonboard.com>,
        Luis Chamberlain <mcgrof@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Rob Herring <robh@...nel.org>, shuah <shuah@...nel.org>,
        Theodore Ts'o <tytso@....edu>,
        Masahiro Yamada <yamada.masahiro@...ionext.com>,
        devicetree <devicetree@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        kunit-dev@...glegroups.com,
        "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
        linux-fsdevel@...r.kernel.org,
        linux-kbuild <linux-kbuild@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>,
        linux-nvdimm <linux-nvdimm@...ts.01.org>,
        linux-um@...ts.infradead.org,
        Sasha Levin <Alexander.Levin@...rosoft.com>,
        "Bird, Timothy" <Tim.Bird@...y.com>,
        Amir Goldstein <amir73il@...il.com>,
        Dan Carpenter <dan.carpenter@...cle.com>,
        Daniel Vetter <daniel@...ll.ch>, Jeff Dike <jdike@...toit.com>,
        Joel Stanley <joel@....id.au>,
        Julia Lawall <julia.lawall@...6.fr>,
        Kevin Hilman <khilman@...libre.com>,
        Knut Omang <knut.omang@...cle.com>,
        Logan Gunthorpe <logang@...tatee.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Petr Mladek <pmladek@...e.com>,
        Randy Dunlap <rdunlap@...radead.org>,
        Richard Weinberger <richard@....at>,
        David Rientjes <rientjes@...gle.com>,
        Steven Rostedt <rostedt@...dmis.org>, wfg@...ux.intel.com
Subject: Re: [PATCH v5 01/18] kunit: test: add KUnit test runner core

Quoting Brendan Higgins (2019-06-25 13:28:25)
> On Wed, Jun 19, 2019 at 5:15 PM Stephen Boyd <sboyd@...nel.org> wrote:
> >
> > Quoting Brendan Higgins (2019-06-17 01:25:56)
> > > diff --git a/kunit/test.c b/kunit/test.c
> > > new file mode 100644
> > > index 0000000000000..d05d254f1521f
> > > --- /dev/null
> > > +++ b/kunit/test.c
> > > @@ -0,0 +1,210 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * Base unit test (KUnit) API.
> > > + *
> > > + * Copyright (C) 2019, Google LLC.
> > > + * Author: Brendan Higgins <brendanhiggins@...gle.com>
> > > + */
> > > +
> > > +#include <linux/sched/debug.h>
> > > +#include <kunit/test.h>
> > > +
> > > +static bool kunit_get_success(struct kunit *test)
> > > +{
> > > +       unsigned long flags;
> > > +       bool success;
> > > +
> > > +       spin_lock_irqsave(&test->lock, flags);
> > > +       success = test->success;
> > > +       spin_unlock_irqrestore(&test->lock, flags);
> >
> > I still don't understand the locking scheme in this code. Is the
> > intention to make getter and setter APIs that are "safe" by adding in a
> > spinlock that is held around getting and setting various members in the
> > kunit structure?
> 
> Yes, your understanding is correct. It is possible for a user to write
> a test such that certain elements are updated from different threads;
> this would most likely happen when someone wants to make an assertion
> or an expectation in a thread created by a piece of code under test.
> Although this should generally be avoided, it is possible and there are
> occasionally good reasons to do it, so it is functionality that we
> should support.
> 
> Do you think I should add a comment to this effect?

No, I think the locking should be removed.

> 
> > In what situation is there more than one thread reading or writing the
> > kunit struct? Isn't it only a single process that is going to be
> 
> As I said above, the code under test may spawn a new thread that makes
> an expectation or an assertion. It is not a super common use case, but
> it is possible.

Sure, sounds super possible and OK.
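
For concreteness, the sort of test I picture for that case is roughly the
sketch below (example_worker()/example_threaded_test() and the trivial
expectation are made up here, using the expectation/assertion macros from
later in this series):

#include <kunit/test.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/kthread.h>

struct example_ctx {
	struct kunit *test;
	struct completion done;
};

static int example_worker(void *data)
{
	struct example_ctx *ctx = data;

	/* Expectation made from a thread other than the one running the test */
	KUNIT_EXPECT_EQ(ctx->test, 1 + 1, 2);
	complete(&ctx->done);
	return 0;
}

static void example_threaded_test(struct kunit *test)
{
	struct example_ctx ctx = { .test = test };
	struct task_struct *tsk;

	init_completion(&ctx.done);
	tsk = kthread_run(example_worker, &ctx, "kunit-worker");
	KUNIT_ASSERT_FALSE(test, IS_ERR(tsk));

	/* Don't return (and free ctx off the stack) until the worker is done */
	wait_for_completion(&ctx.done);
}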

> 
> > operating on this structure? And why do we need to disable irqs? Are we
> > expecting to be modifying the unit tests from irq contexts?
> 
> There are instances where someone may want to test a driver which has
> an interrupt handler in it; I actually have a (not the greatest) example
> of that here. In these cases, I expect someone to use a mock irqchip or
> some other fake mechanism to trigger the interrupt handler rather than
> actual hardware. Technically speaking, the handler is then not invoked
> from a "real" irq context; however, the code under test should think
> that it is in an irq context, so I figured it is best to just treat it
> as a real irq context. Does that make sense?
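
Just to check I follow, I read that as a test that pokes the driver's
handler through some fake mechanism instead of real hardware, roughly
like this sketch (struct example_dev and example_irq_handler() are made
up for illustration):

#include <kunit/test.h>
#include <linux/interrupt.h>

/* Hypothetical driver state and handler under test */
struct example_dev {
	int irq_count;
};

static irqreturn_t example_irq_handler(int irq, void *data)
{
	struct example_dev *dev = data;

	dev->irq_count++;
	return IRQ_HANDLED;
}

static void example_irq_test(struct kunit *test)
{
	struct example_dev dev = { .irq_count = 0 };

	/*
	 * No real hardware here: the fake "irqchip" boils down to calling
	 * the handler directly, so it only looks like irq context from
	 * the point of view of the code under test.
	 */
	example_irq_handler(0, &dev);

	KUNIT_EXPECT_EQ(test, dev.irq_count, 1);
}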

Can you please describe a scenario in which grabbing the lock here,
updating a single variable, and then releasing the lock right after
does anything useful vs. not having the lock? I'm looking for a two-CPU
scenario like the one below, but where it is a problem. It could be
three CPUs, or even one CPU and three threads, if you want to describe
the extra-thread scenario.

Here's my scenario where it isn't needed:

    CPU0                                      CPU1
    ----                                      ----
    kunit_run_test(&test)
                                              test_case_func()
                                                ....
                                              [mock hardirq]
                                                kunit_set_success(&test)
                                              [hardirq ends]
                                                ...
                                                complete(&test_done)
      wait_for_completion(&test_done)
      kunit_get_success(&test)

We don't need any locking here because the success or failure update
only happens in one place and it is synchronized with the reader by the
completion.
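
In code, the pattern I have in mind is roughly the sketch below (struct
example_test and the two functions are stand-ins for the diagram above,
not the code from this patch):

#include <linux/completion.h>
#include <linux/printk.h>

/* Stand-in for the relevant parts of struct kunit */
struct example_test {
	bool success;
	struct completion done;
};

/* Runs on CPU1, in the mock hardirq from the diagram */
static void example_mock_hardirq(struct example_test *test)
{
	test->success = true;
	complete(&test->done);
}

/* Runs on CPU0, i.e. the kunit_run_test() side of the diagram */
static void example_run_test(struct example_test *test)
{
	init_completion(&test->done);

	/* ... kick off test_case_func() on CPU1 ... */

	/* Blocks until CPU1 has called complete() */
	wait_for_completion(&test->done);

	/*
	 * wait_for_completion() pairs with complete(), so the store to
	 * test->success on CPU1 is ordered before this load; a plain
	 * read is enough and the spinlock buys nothing.
	 */
	pr_info("result: %s\n", test->success ? "success" : "failure");
}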

> 
> > > +
> > > +       return success;
> > > +}
> > > +
> > > +static void kunit_set_success(struct kunit *test, bool success)
> > > +{
> > > +       unsigned long flags;
> > > +
> > > +       spin_lock_irqsave(&test->lock, flags);
> > > +       test->success = success;
> > > +       spin_unlock_irqrestore(&test->lock, flags);
> > > +}
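
In other words, with the completion doing the synchronization I'd expect
these two helpers to reduce to plain accesses, something like:

static bool kunit_get_success(struct kunit *test)
{
	return test->success;
}

static void kunit_set_success(struct kunit *test, bool success)
{
	test->success = success;
}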
