Message-ID: <20190213192512.GH69686@sasha-vm>
Date: Wed, 13 Feb 2019 14:25:12 -0500
From: Sasha Levin <sashal@...nel.org>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: Amir Goldstein <amir73il@...il.com>,
Steve French <smfrench@...il.com>,
lsf-pc@...ts.linux-foundation.org,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
"Luis R. Rodriguez" <mcgrof@...nel.org>
Subject: Re: [LSF/MM TOPIC] FS, MM, and stable trees
On Wed, Feb 13, 2019 at 10:18:03AM +0100, Greg KH wrote:
>On Wed, Feb 13, 2019 at 11:01:25AM +0200, Amir Goldstein wrote:
>> Best effort testing in timely manner is good, but a good way to
>> improve confidence in stable kernel releases is a publicly
>> available list of tests that the release went through.
>
>We have that, you aren't noticing them...
This is one of the biggest things I want to address: there is a
disconnect between the stable kernel testing story and the tests the fs/
and mm/ folks expect to see here.
On one hand, the stable kernel folks see these kernels go through entire
suites of testing by multiple individuals and organizations, receiving
far more coverage than any of Linus's releases.
On the other hand, things like LTP and selftests tend to barely scratch
the surface of our mm/ and fs/ code, and the maintainers of these
subsystems do not see LTP-like suites as adding significant value, so
they ignore them. Instead, they have a (convoluted) set of tests they
run with different tools and configurations before they consider their
code "tested".
So really, it sounds like low-hanging fruit: we don't really need to
write much more test code, nor do we have to refactor existing test
suites. We just need to make sure the right tests are running on
stable kernels. I really want to clarify what each subsystem sees as
"sufficient" (and have that documented somewhere).
--
Thanks,
Sasha