Message-ID: <20190725163946.xt2p3pvxwuabzojj@xps.therub.org>
Date: Thu, 25 Jul 2019 11:39:46 -0500
From: Dan Rue <dan.rue@...aro.org>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Naresh Kamboju <naresh.kamboju@...aro.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Anders Roxell <anders.roxell@...aro.org>,
Ben Hutchings <ben.hutchings@...ethink.co.uk>,
wanpengli@...cent.com,
Linus Torvalds <torvalds@...ux-foundation.org>,
patches@...nelci.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
lkft-triage@...ts.linaro.org,
linux-stable <stable@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>,
Guenter Roeck <linux@...ck-us.net>, jmattson@...gle.com
Subject: Re: [PATCH 5.2 000/413] 5.2.3-stable review
On Thu, Jul 25, 2019 at 06:30:10PM +0200, Paolo Bonzini wrote:
> On 25/07/19 18:20, Sean Christopherson wrote:
> > On Thu, Jul 25, 2019 at 06:10:37PM +0200, Paolo Bonzini wrote:
> >> On 25/07/19 18:09, Sean Christopherson wrote:
> >>>> This investigation confirms it is a new test code failure on stable-rc 5.2.3
> >>> No, it only confirms that kvm-unit-tests/master fails on 5.2.*. To confirm
> >>> a new failure in 5.2.3 you would need to show a test that passes on 5.2.2
> >>> and fails on 5.2.3.
> >>
> >> I think he meant "a failure in new test code". :)
> >
> > Ah, that does appear to be the case. So just to be clear, we're good, right?
>
> Yes. I'm happy to gather ideas on how to avoid this (i.e. 1) if a
> submodule would be useful; 2) where to stick it).
Hi!
First, to be clear: from the LKFT perspective, there are no kernel
regressions here.
To your point, Paolo: reporting 'fail' because of a missing kernel
feature is a generic problem we see across test suites, and it causes
tons of pain and misery for CI people. As a general rule, I'd avoid
submodules, and even branches that track specific kernels. Rather, and
I don't know if it's possible in this case, the best way to manage it
from both a test author and a test runner POV is to wrap the test in
kernel feature checks, kernel version checks, kernel config checks,
etc.
Report 'skip' if the environment in which the test is running isn't
sufficient to run the test. Then, you only have to maintain one version
of the test suite, users can always use the latest, and critically: all
failures are actual failures.
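
To make that concrete, here is a rough sketch of the pattern I mean,
borrowing the kselftest exit-code convention (exit code 4 == skip).
The version threshold and helper name below are made up for
illustration; this is not a kvm-unit-tests API:

#include <stdio.h>
#include <sys/utsname.h>

#define KSFT_PASS 0
#define KSFT_FAIL 1
#define KSFT_SKIP 4

/* Return nonzero if the running kernel is at least major.minor. */
static int kernel_at_least(int major, int minor)
{
	struct utsname un;
	int kmaj = 0, kmin = 0;

	if (uname(&un))
		return 0;
	if (sscanf(un.release, "%d.%d", &kmaj, &kmin) != 2)
		return 0;
	return kmaj > major || (kmaj == major && kmin >= minor);
}

int main(void)
{
	/* Hypothetical example: feature under test appeared in 5.3. */
	if (!kernel_at_least(5, 3)) {
		printf("SKIP: kernel too old for this feature\n");
		return KSFT_SKIP;
	}

	/* ... run the real test here ... */
	printf("PASS\n");
	return KSFT_PASS;
}

The same idea applies to config checks (skip if the required
CONFIG_* option isn't set) and feature probes (skip if the ioctl or
capability isn't there), so the suite never reports 'fail' for
something the kernel simply doesn't have.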
Dan
>
> Paolo
--
Linaro - Kernel Validation