Message-ID: <6d70b168-d05b-5330-89c8-148b334f9fce@gmail.com>
Date:   Tue, 23 Jun 2020 18:55:20 -0500
From:   Frank Rowand <frowand.list@...il.com>
To:     David Gow <davidgow@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>
Cc:     "Bird, Tim" <Tim.Bird@...y.com>,
        "shuah@...nel.org" <shuah@...nel.org>,
        "linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
        Brendan Higgins <brendanhiggins@...gle.com>,
        Kees Cook <keescook@...omium.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: RFC: KTAP documentation - expected messages

+Dmitry, since Brendan added him to another reply at this thread level

On 2020-06-22 21:46, David Gow wrote:
> On Mon, Jun 22, 2020 at 6:45 AM Frank Rowand <frowand.list@...il.com> wrote:
>>
>> Tim Bird started a thread [1] proposing that he document the selftest result
>> format used by Linux kernel tests.
>>
>> [1] https://lore.kernel.org/r/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com
>>
>> The issue of messages generated by the kernel being tested (that are not
>> messages directly created by the tests, but are instead triggered as a
>> side effect of the test) came up.  In this thread, I will call these
>> messages "expected messages".  Instead of sidetracking that thread with
>> a proposal to handle expected messages, I am starting this new thread.
> 
> Thanks for doing this: I think there are quite a few tests which could
> benefit from something like this.
> 
> I think there were actually two separate questions: what do we do with
> unexpected messages (most of which I expect are useless, but some of
> which may end up being related to an unexpected test failure), and how
> to have tests "expect" a particular message to appear. I'll stick to

Yes.  But there is also a third aspect that made this feature important
for the Devicetree unittest.  There was a question on the devicetree
mailing list asking whether some devicetree related kernel warning and/or
error messages were devicetree problems.  The messages appeared while
the unittest was executing, but at the same time a lot of system
initialization was in progress, resulting in lots of console messages
that were unrelated to the unittest.  I could not in good conscience
reply that the messages were truly of no consequence without actually
chasing each of them down and verifying that they were triggered by
the unittest, and were showing what the devicetree infrastructure should
be reporting in response to the test stimulus, rather than an underlying
bug in the devicetree infrastructure.

I found the expected messages API to be a useful tool to document
the validity of the messages, both for myself, and for the random
developer who might be reading the boot messages.
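
The shape of the annotation is simple: bracket the test stimulus with a
begin marker and an end marker that both carry the text of the message
expected to show up in between.  A rough sketch (the macro names, marker
format, and message text below are illustrative, not copied from the
mainline drivers/of/unittest.c code):

  #include <linux/printk.h>

  /*
   * Illustrative sketch only -- modeled on the annotations used by the
   * Devicetree unittest, but the macro names and the "EXPECT" marker
   * format here are assumptions.
   */
  #define EXPECT_BEGIN(level, fmt, ...) \
          printk(level "EXPECT \\ : " fmt, ##__VA_ARGS__)

  #define EXPECT_END(level, fmt, ...) \
          printk(level "EXPECT / : " fmt, ##__VA_ARGS__)

  static void unittest_example(void)
  {
          /*
           * The console log will contain the begin marker, the warning
           * triggered as a side effect of the test, then the end marker.
           * A post-processing tool can pair the markers with the message
           * between them and flag it as expected.
           */
          EXPECT_BEGIN(KERN_INFO,
                       "OF: example warning expected during this test\n");

          /* ... test stimulus that provokes the warning goes here ... */

          EXPECT_END(KERN_INFO,
                     "OF: example warning expected during this test\n");
  }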

> talking about the latter for this thread, but even there, there are two
> possible interpretations of "expected messages" we probably want to
> explicitly distinguish between: a message which must be present for
> the test to pass (which I think best fits the "expected message"
> name), and a message which the test is likely to produce, but which
> shouldn't alter the result (an "ignored message").

This type of case was the impetus for me to create the API.  There
was a unittest that resulted in the probe of a device, where the
probe executed devicetree subsystem code that invoked a
blocking_notifier_call_chain(), which resulted in another subsystem
taking some action, and that action just happened to do a printk()
reporting the specific action that the unittest was trying to
verify.

I was able to verify much of the asynchronous activity by creating
a fake driver, with corresponding devices to be probed, that I could
instrument.  The printk() information provided the last little bit
of checking for correct behavior.
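
To make the sequence concrete, the chain of events looked roughly like
the sketch below.  The names (example_chain, other_subsys_cb, fake_probe)
are invented for illustration and are not the actual devicetree or driver
code paths:

  #include <linux/init.h>
  #include <linux/notifier.h>
  #include <linux/printk.h>

  static BLOCKING_NOTIFIER_HEAD(example_chain);

  /* "Another subsystem" reacts to the notification and logs what it did. */
  static int other_subsys_cb(struct notifier_block *nb,
                             unsigned long action, void *data)
  {
          pr_info("other subsystem: handled action %lu\n", action);
          return NOTIFY_OK;
  }

  static struct notifier_block other_subsys_nb = {
          .notifier_call = other_subsys_cb,
  };

  /* Probe path of the instrumented fake driver. */
  static int fake_probe(void)
  {
          /*
           * The devicetree subsystem code invoked from the probe fires
           * the notifier chain; the printk() in the callback above is
           * the "expected message" the unittest wants to see on the
           * console.
           */
          blocking_notifier_call_chain(&example_chain, 1, NULL);
          return 0;
  }

  static int __init example_init(void)
  {
          blocking_notifier_chain_register(&example_chain, &other_subsys_nb);
          return fake_probe();
  }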

> I don't see much
> use for the latter at present, but if we wanted to do more things with
> messages and had some otherwise very verbose tests, it could
> potentially be useful.

The use for the "ignored message" is my third aspect above.  This points
out that yet another possible consumer of the console boot log is the
QA or test engineer.  They can have the same concerns as any "random
developer".

> 
> The other thing I'd note here is that this proposal seems to be doing
> all of the actual message filtering in userspace, which makes a lot of
> sense for kselftest tests, but does mean that the kernel can't know if
> the test has passed or failed.

Yes.  I had absolutely no interest in my test code examining the history
of console messages, which could be generated on any other processor.

The printk related code has always been complex, nuanced, and seems to
attract the attention of people who want to change it instead of
leaving it stable.  I would really like to stay away from any dependency
on it.

> There's definitely a tradeoff between
> trying to put too much needless string parsing in the kernel and
> having to have a userland tool determine the test results. The
> proposed KCSAN test suite[1] is using tracepoints to do this in the
> kernel. It's not the cleanest thing, but there's no reason KUnit or
> similar couldn't implement a nicer API around it.
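
For reference, the tracepoint approach hooks the console trace event so
the test can look at each printk() line inside the kernel.  A rough
sketch, not the actual KCSAN test code (the probe name, helper names,
and the string being matched are illustrative):

  #include <linux/string.h>
  #include <linux/tracepoint.h>
  #include <trace/events/printk.h>

  static bool saw_expected_message;

  /* Called for every console message while the probe is registered. */
  static void probe_console(void *ignore, const char *text, size_t len)
  {
          if (strnstr(text, "expected warning text", len))
                  saw_expected_message = true;
  }

  static int example_hook_console(void)
  {
          return register_trace_console(probe_console, NULL);
  }

  static void example_unhook_console(void)
  {
          unregister_trace_console(probe_console, NULL);
          tracepoint_synchronize_unregister();
  }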

My interest is in printk() based messages that are present in areas
outside of my test code and independent of my test code.  I specifically
did not want to modify that existing code with any test code.

I was willing to accept the extra layer of running a user space program
to process the console output to verify one small portion of the
test passing or failing (or, alternatively, just examining the console
output manually).
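
As a sketch of what that user space step does (the real tool is the
of_unittest_expect Perl script; the C program below and the exact
"EXPECT \ : " / "EXPECT / : " marker strings are just an approximation
of the idea, flagging expected messages rather than removing them):

  #include <stdio.h>
  #include <string.h>

  #define BEGIN_MARK "EXPECT \\ : "
  #define END_MARK   "EXPECT / : "

  int main(void)
  {
          char line[4096];
          char expected[4096] = "";

          while (fgets(line, sizeof(line), stdin)) {
                  char *p = strstr(line, BEGIN_MARK);

                  if (p) {
                          /* Remember the message text we are told to expect. */
                          snprintf(expected, sizeof(expected), "%s",
                                   p + strlen(BEGIN_MARK));
                          expected[strcspn(expected, "\n")] = '\0';
                          continue;       /* drop the begin marker line */
                  }
                  if (strstr(line, END_MARK)) {
                          expected[0] = '\0';
                          continue;       /* drop the end marker line */
                  }
                  if (expected[0] != '\0' && strstr(line, expected)) {
                          /* Flag the expected message instead of hiding it. */
                          printf("EXPECTED: %s", line);
                          continue;
                  }
                  fputs(line, stdout);
          }
          return 0;
  }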

-Frank

> 
> [1]: https://lkml.org/lkml/2020/6/22/1506
> 
>> I implemented an API for expected messages that are triggered by tests
>> in the Devicetree unittest code, with the expectation that the specific
>> details may change when the Devicetree unittest code adapts the KUnit
>> API.  It seems appropriate to incorporate the concept of expected
>> messages in Tim's documentation instead of waiting to address the
>> subject when the Devicetree unittest code adapts the KUnit API, since
>> Tim's document may become the kernel selftest standard.
> 
> Is having a nice way to handle expected messages the only thing
> holding up porting this to KUnit?
> 
>> Instead of creating a very long email containing multiple objects,
>> I will reply to this email with a separate reply for each of:
>>
>>   The "expected messages" API implementation and use can be seen in
>>   drivers/of/unittest.c in the mainline kernel.
>>
>>   of_unittest_expect - A proof of concept Perl program to filter console
>>                        output containing expected messages
>>
>>                        of_unittest_expect is also available by cloning
>>                        https://github.com/frowand/dt_tools.git
>>
>>   An example of raw console output with timestamps and expected messages.
>>
>>   An example of console output processed by filter program
>>   of_unittest_expect to be more human readable.  The expected
>>   messages are not removed, but are flagged.
>>
>>   An example of console output processed by filter program
>>   of_unittest_expect to be more human readable.  The expected
>>   messages are removed instead of being flagged.
> 
> Cheers,
> -- David
> 
