Message-ID: <4cd4c178-4dcc-4a31-98f7-48b870380d5f@lunn.ch>
Date: Mon, 3 Nov 2025 16:31:12 +0100
From: Andrew Lunn <andrew@...n.ch>
To: Sabrina Dubroca <sd@...asysnail.net>
Cc: Jakub Kicinski <kuba@...nel.org>, Wang Liang <wangliang74@...wei.com>,
davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
shuah@...nel.org, horms@...nel.org, netdev@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org,
yuehaibing@...wei.com, zhangchangzhong@...wei.com
Subject: Re: [PATCH net] selftests: netdevsim: Fix ethtool-features.sh fail
On Mon, Nov 03, 2025 at 04:01:00PM +0100, Sabrina Dubroca wrote:
> 2025-11-03, 14:36:00 +0100, Andrew Lunn wrote:
> > On Mon, Nov 03, 2025 at 11:13:08AM +0100, Sabrina Dubroca wrote:
> > > 2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
> > > > On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
> > > > > > set -o pipefail
> > > > > >
> > > > > > +if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
> > > > >
> > > > > I guess it's improving the situation, but I've got a system with an
> > > > > ethtool that accepts the --json argument, but silently ignores it for
> > > > > -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json
> > > > > output), which will still cause the test to fail later.
> > > >
> > > > And --json was added to -k in Jan 2022, that's pretty long ago.
> > > > I'm not sure we need this aspect of the patch at all..
> > >
> > > Ok. Then maybe a silly idea: for the tests that currently have some
> > > form of "$TOOL is too old" check, do we want to remove those after a
> > > while? If so, how long after the feature was introduced in $TOOL?
> >
> > Another option is to turn them into a hard fail, after X years.
>
> If the "skip if too old" check is removed, the test will fail when run
> with old tools (because whatever feature is needed will not be
> supported, so somewhere in the middle of test execution there will be
> a failure - but the developer will have to figure out "tool too old"
> from some random command failing).
Which is not great. It would be much better if the failure message
were: 'ethtool: your version is more than $X years old. Please upgrade'

We could also embed the date the requirement was added into the
test. So once $X years have passed, the test will automatically start
failing, with no additional work for the test maintainer.
> > My
> > guess is, tests which get skipped because the test tools are too old
> > frequently get ignored. Tests which fail are more likely to be looked
> > at, and the tools updated.
> >
> > Another idea is have a dedicated test which simply tests the versions
> > of all the tools. And it should only pass if the installed tools are
> > sufficiently new that all test can pass. If you have tools which are
> > in the grey zone between too old to cause skips, but not old enough to
> > cause fails, you then just have one failing test you need to turn a
> > blind eye to.
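Roughly, such a dedicated version-check test could look like the sketch
below; the tool list and minimum versions are invented for illustration,
not real kselftest policy:

```shell
#!/bin/sh
# Hypothetical standalone "tool versions" selftest: it fails unless every
# tool is new enough for the full suite, so stale tooling shows up as
# exactly one red entry on the status board.

ret=0

check_min_version() {
    tool="$1"; min="$2"
    ver=$("$tool" --version 2>/dev/null | grep -oE '[0-9]+(\.[0-9]+)+' | head -n1)
    if [ -z "$ver" ]; then
        echo "FAIL: $tool not found or prints no version"
        ret=1
        return
    fi
    # sort -V orders version strings; the minimum must not sort after
    # the installed version.
    if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
        echo "OK:   $tool $ver (>= $min)"
    else
        echo "FAIL: $tool $ver is older than required $min"
        ret=1
    fi
}

# Illustrative minimums only:
check_min_version ethtool 5.15
check_min_version ip 5.15

# A real test would end with `exit $ret` so a stale toolchain turns into
# a single failing test.
```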
>
> That's assuming people run all the tests every time. Is that really
> the case, or do people often run the 2-5 tests that cover the area
> they care about? For example it doesn't make much sense to run nexthop
> and TC tests for a macsec patch (and the other way around). If my
> iproute is too old to run some nexthop or TC tests, I can still run
> the tests I really need for my patch.
>
> But maybe if the tests are run as "run everything" (rather than
> manually running a few of them), ensuring all the needed tools are
> recent enough makes sense.
I've not done any of this sort of testing for kernel work, but I have
for other projects. As a developer I tend to manually run the test of
interest to get the feature working. I then throw the code at a
Jenkins instance which runs all the tests, just to find out if I've
accidentally broken something elsewhere. It happens: there is a side
effect I did not spot, etc. Regression testing tends to run
everything, perhaps every day, otherwise on each change set. It costs
no developer time, other than looking at the status board the next
day.
Andrew