Message-ID: <202006162032.9BF6F8F4E@keescook>
Date: Tue, 16 Jun 2020 20:36:06 -0700
From: Kees Cook <keescook@...omium.org>
To: "Bird, Tim" <Tim.Bird@...y.com>
Cc: Brendan Higgins <brendanhiggins@...gle.com>,
"shuah@...nel.org" <shuah@...nel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
David Gow <davidgow@...gle.com>
Subject: Re: RFC - kernel selftest result documentation (KTAP)
On Wed, Jun 17, 2020 at 02:30:45AM +0000, Bird, Tim wrote:
> Agreed. You only need machine-parsable data if you expect the CI
> system to do something more with the data than just present it.
> What that something would be, common to all tests (or at least many
> tests), is unclear. Maybe there are patterns in the diagnostic
> data that could lead to higher-level analysis, or even automated
> fixes, that don't become apparent if the data is unstructured. But
> it's hard to know until you have lots of data. I think just getting
> the other things consistent is a good priority right now.
Yeah. I think the main place for this is performance analysis, but I
think that's a separate system entirely. TAP is really strictly yes/no,
whereas performance analysis is a whole other thing. The only other
thing I can think of is some kind of feature analysis, but that would be
built out of the standard yes/no output. E.g., if I create a test that
checks for specific security mitigation features (*cough*LKDTM*cough*),
then a dashboard that shows features down one axis and architectures
and/or kernel versions on the other axes gives me a pretty picture. But
it's still built out of the yes/no info.
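For illustration (a hypothetical sketch, not actual LKDTM output; the
test names here are made up), the per-feature yes/no results feeding
such a dashboard might look like:

    TAP version 13
    1..3
    ok 1 - mitigation_smep
    ok 2 - mitigation_smap
    not ok 3 - mitigation_pti

Each "ok"/"not ok" line becomes one cell in the feature-vs-architecture
matrix; nothing beyond the pass/fail bit is needed for that picture.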
*shrug*
I think diagnostic output should be expressly non-machine-oriented.
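In TAP terms, that means keeping the "#" diagnostic lines as free-form
text for humans, e.g. (hypothetical output):

    not ok 3 - mitigation_pti
    # expected PTI to be active, but /proc/cpuinfo shows no "pti" flag
    # (free-form: a human reads this; the CI only parses the "not ok")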
--
Kees Cook