Message-Id: <20260120092252.8597a496ed1cdebe5e120fb6@linux-foundation.org>
Date: Tue, 20 Jan 2026 09:22:52 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mark Brown <broonie@...nel.org>
Cc: David Hildenbrand <david@...nel.org>, Lorenzo Stoakes
<lorenzo.stoakes@...cle.com>, "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>, Suren
Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>, Shuah
Khan <shuah@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>, Leon Romanovsky
<leon@...nel.org>, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH] selftests/mm: Have the harness run each test category
separately
On Tue, 20 Jan 2026 13:25:32 +0000 Mark Brown <broonie@...nel.org> wrote:
> At present the mm selftests are integrated into the kselftest harness by
> having it run run_vmtest.sh and letting it pick it's default set of
> tests to invoke, rather than by telling the kselftest framework about
> each test program individually as is more standard. This has some
> unfortunate interactions with the kselftest harness:
>
> - If any of the tests hangs the harness will kill the entire mm
> selftests run rather than just the individual test, meaning no
> further tests get run.
> - The timeout applied by the harness is applied to the whole run rather
> than an individual test which frequently leads to the suite not being
> completed in production testing.
>
> Deploy a crude but effective mitigation for these issues by telling the
> kselftest framework to run each of the test categories that run_vmtests.sh
> has separately. Since kselftest really wants to run test programs, this
> is done by providing a trivial wrapper script for each category that
> invokes run_vmtests.sh; this is not a thing of great elegance, but it is
> clear and simple. Since run_vmtests.sh is doing runtime support
> detection, scenario enumeration and setup for many of the tests, we can't
> consistently tell the framework about the individual test programs.
>
> This has the side effect of reordering the tests; hopefully the testing
> is not overly sensitive to this.
Thanks, let's see what people think.
What happens with tests which are newly added but which don't integrate
into this new framework? eg,
https://lkml.kernel.org/r/20260120123239.909882-2-linmiaohe@huawei.com
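
For illustration, each per-category wrapper described in the quoted patch
might look roughly like the sketch below. The file name, the example
category name and the use of run_vmtests.sh's -t option to select a single
category are assumptions made here for the sketch, not details taken from
the patch itself:

    #!/bin/bash
    # Hypothetical per-category wrapper (e.g. run_hugetlb.sh).
    # Assumes run_vmtests.sh accepts -t to restrict the run to the named
    # category, so the kselftest harness applies its timeout to this
    # category alone rather than to the whole mm suite.
    exec ./run_vmtests.sh -t hugetlb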