Message-Id: <20190627181636.5EA752064A@mail.kernel.org>
Date: Thu, 27 Jun 2019 11:16:35 -0700
From: Stephen Boyd <sboyd@...nel.org>
To: Brendan Higgins <brendanhiggins@...gle.com>
Cc: Frank Rowand <frowand.list@...il.com>,
Greg KH <gregkh@...uxfoundation.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Kees Cook <keescook@...gle.com>,
Kieran Bingham <kieran.bingham@...asonboard.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Rob Herring <robh@...nel.org>, shuah <shuah@...nel.org>,
Theodore Ts'o <tytso@....edu>,
Masahiro Yamada <yamada.masahiro@...ionext.com>,
devicetree <devicetree@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
kunit-dev@...glegroups.com,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
linux-fsdevel@...r.kernel.org,
linux-kbuild <linux-kbuild@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
linux-um@...ts.infradead.org,
Sasha Levin <Alexander.Levin@...rosoft.com>,
"Bird, Timothy" <Tim.Bird@...y.com>,
Amir Goldstein <amir73il@...il.com>,
Dan Carpenter <dan.carpenter@...cle.com>,
Daniel Vetter <daniel@...ll.ch>, Jeff Dike <jdike@...toit.com>,
Joel Stanley <joel@....id.au>,
Julia Lawall <julia.lawall@...6.fr>,
Kevin Hilman <khilman@...libre.com>,
Knut Omang <knut.omang@...cle.com>,
Logan Gunthorpe <logang@...tatee.com>,
Michael Ellerman <mpe@...erman.id.au>,
Petr Mladek <pmladek@...e.com>,
Randy Dunlap <rdunlap@...radead.org>,
Richard Weinberger <richard@....at>,
David Rientjes <rientjes@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>, wfg@...ux.intel.com
Subject: Re: [PATCH v5 01/18] kunit: test: add KUnit test runner core
Quoting Brendan Higgins (2019-06-26 16:00:40)
> On Tue, Jun 25, 2019 at 8:41 PM Stephen Boyd <sboyd@...nel.org> wrote:
>
> > scenario like below, but where it is a problem. There could be three
> > CPUs, or even one CPU and three threads if you want to describe the
> > extra thread scenario.
> >
> > Here's my scenario where it isn't needed:
> >
> > CPU0                              CPU1
> > ----                              ----
> > kunit_run_test(&test)
> >                                   test_case_func()
> >                                     ....
> >                                     [mock hardirq]
> >                                     kunit_set_success(&test)
> >                                     [hardirq ends]
> >                                     ...
> >                                   complete(&test_done)
> > wait_for_completion(&test_done)
> > kunit_get_success(&test)
> >
> > We don't need to care about having locking here because success or
> > failure only happens in one place and it's synchronized with the
> > completion.
>
> Here is the scenario I am concerned about:
>
> CPU0                          CPU1                        CPU2
> ----                          ----                        ----
> kunit_run_test(&test)
>                               test_case_func()
>                                 ....
>                               schedule_work(foo_func)
>                               [mock hardirq]              foo_func()
>                                 ...                         ...
>                               kunit_set_success(false)    kunit_set_success(false)
>                               [hardirq ends]                ...
>                                 ...
>                               complete(&test_done)
> wait_for_completion(...)
> kunit_get_success(&test)
>
> In my scenario, both CPU1 and CPU2 update the success status of the
> test simultaneously, even though they are setting it to the same
> value. If my understanding is correct, this could result in a
> write-tear on some architectures in some circumstances. I suppose we
> could just make it an atomic boolean, but I figured locking is also
> fine, and generally preferred.
This is what we have WRITE_ONCE() and READ_ONCE() for. Maybe you could
just use those in the getter and setter and remove the lock if it isn't
used for anything else.
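
For illustration, here's a minimal sketch of what the lockless
accessors could look like, assuming the status lives in a 'bool
success' member of struct kunit (that field name is my assumption, not
necessarily what the patch has):

	/* Store the status with a single write that the compiler can't tear. */
	static inline void kunit_set_success(struct kunit *test, bool success)
	{
		WRITE_ONCE(test->success, success);
	}

	/* Read the status back with a single, non-torn load. */
	static inline bool kunit_get_success(struct kunit *test)
	{
		return READ_ONCE(test->success);
	}

Note that WRITE_ONCE()/READ_ONCE() only keep each access whole; they
don't order the accesses against anything else, so the completion is
still what publishes the final value to the waiter.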
It may also be a good idea to have a kunit_fail_test() API that fails
the test passed in with a WRITE_ONCE(false). If that API is never
called, the test is assumed successful, and it isn't even possible for
a test to flip the state from failure back to success through a
logical error, because no API for that exists. Then we don't really
need a generic kunit_set_success() function at all. We could also have
a kunit_test_failed() function that replaces the kunit_get_success()
function; that would read better in an if condition.
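
Roughly something like this, again just a sketch (the 'success' member
is an assumption about the struct layout):

	/* Tests start out successful; this is the only way to change that. */
	static inline void kunit_fail_test(struct kunit *test)
	{
		WRITE_ONCE(test->success, false);
	}

	static inline bool kunit_test_failed(struct kunit *test)
	{
		return !READ_ONCE(test->success);
	}

Then the runner can do

	if (kunit_test_failed(test))
		...

instead of negating a "get success" call.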
>
> Also, to be clear, I am on board with dropping the IRQ stuff for now.
> I am fine moving to a mutex for the time being.
>
Ok.