Message-ID: <20251103160133.31c856a4@kernel.org>
Date: Mon, 3 Nov 2025 16:01:33 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Sabrina Dubroca <sd@...asysnail.net>
Cc: Wang Liang <wangliang74@...wei.com>, andrew@...n.ch,
davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
shuah@...nel.org, horms@...nel.org, netdev@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org,
yuehaibing@...wei.com, zhangchangzhong@...wei.com
Subject: Re: [PATCH net] selftests: netdevsim: Fix ethtool-features.sh fail
On Mon, 3 Nov 2025 11:13:08 +0100 Sabrina Dubroca wrote:
> 2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
> > On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
> > > I guess it's improving the situation, but I've got a system with an
> > > ethtool that accepts the --json argument, but silently ignores it for
> > > -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json
> > > output), which will still cause the test to fail later.
> >
> > And --json was added to -k in Jan 2022, that's pretty long ago.
> > I'm not sure we need this aspect of the patch at all..
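A more robust probe than checking the exit status would be to verify that the output actually parses as JSON, since some ethtool builds accept `--json` for `-k` but silently fall back to plain text. A hypothetical sketch (the `is_json` helper and the `$DEV` variable are illustrative, not part of the patch under discussion):

```shell
# Hypothetical helper: succeeds only if stdin is valid JSON.
is_json() {
    python3 -c 'import json,sys; json.load(sys.stdin)' 2>/dev/null
}

# Probe whether this ethtool's -k really honors --json: the command may
# exit 0 yet print plain text, so check the output, not the exit status.
if ethtool --json -k "$DEV" 2>/dev/null | is_json; then
    echo "ethtool -k supports --json"
else
    echo "SKIP: ethtool -k lacks working --json output"
fi
```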
>
> Ok. Then maybe a silly idea: for the tests that currently have some
> form of "$TOOL is too old" check, do we want to remove those after a
> while? If so, how long after the feature was introduced in $TOOL?
>
> Or should we leave them, but not accept new checks to exclude
> really-old versions of tools? Do we need to document the cut-off ("we
> don't support tool versions older than 2 years for networking
> selftests" [or similar]) somewhere in Documentation/ ?
FWIW my current thinking is to prioritize test development and kernel
needs over the ability to run ksft on a random set of old tools and get
clean skips. IOW avoid complicating test writing by also making the
author responsible for checking the versions of all the tools.
The list of tools which need to be updated or installed for all
networking tests to pass is rather long. My uneducated guess is
that all these one-off SKIP patches don't amount to much. Here, for
example, the author is fixing one test, but I'm pretty sure that far
more tests depend on -k --json.
Integrating with NIPA is not that hard. If someone cares about us
ensuring that the tests cleanly pass or skip in their environment,
they should start by reporting results to NIPA.