Message-ID: <20240801070740.4ae582df@kernel.org>
Date: Thu, 1 Aug 2024 07:07:40 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Petr Machata <petrm@...dia.com>
Cc: Stanislav Fomichev <sdf@...ichev.me>, <netdev@...r.kernel.org>,
<davem@...emloft.net>, <edumazet@...gle.com>, <pabeni@...hat.com>,
Shuah Khan <shuah@...nel.org>, Joe Damato <jdamato@...tly.com>,
<linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH net-next v2 2/2] selftests: net: ksft: support marking
tests as disruptive
On Thu, 1 Aug 2024 10:36:18 +0200 Petr Machata wrote:
> You seem to be right about the exit code. This was discussed some time
> ago: SKIP is considered a sort of failure. As the person running the
> test, you would want to go in and fix whatever configuration issue is
> preventing the test from running. I'm not sure how it works in
> practice, whether people look for skips in the test log explicitly or
> rely on exit codes.
>
> Maybe Jakub can chime in, since he's the one who cajoled me into
> handling this whole SKIP / XFAIL business properly in bash selftests.
For HW testing there are a lot more variables than just "is some tool
missing in the VM image". I'm not sure how well we can detect HW
capabilities and XFAIL without making the tests super long.
And this case itself is not very clear cut. On one hand, you expect
the test not to run if it's disruptive and the executor can't deal
with disruptive tests - IOW it's an eXpected FAIL. On the other hand,
it's an executor limitation: the device/driver could have been tested
if it weren't for the executor, so it's not entirely dissimilar to a
missing tool.
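
To make the two readings concrete, here is a minimal sketch of how a
marker could map to either outcome in the Python ksft helpers. The
KsftSkipEx / KsftXfailEx names mirror the exceptions in
tools/testing/selftests/net/lib/py/ksft.py; the disruptive decorator
and the DISRUPTIVE environment variable are made up for illustration,
not necessarily the API from the patch under discussion:

import os
import functools

# Exception names mirror tools/testing/selftests/net/lib/py/ksft.py;
# redefined here so the sketch is self-contained.
class KsftSkipEx(Exception):
    pass

class KsftXfailEx(Exception):
    pass

# Hypothetical decorator: gate a disruptive test on an opt-in knob.
def disruptive(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if os.environ.get("DISRUPTIVE", "0") != "1":
            # Reading 1: executor config issue -> report SKIP
            raise KsftSkipEx("executor does not allow disruptive tests")
            # Reading 2: expected, executor-side limitation -> XFAIL
            # raise KsftXfailEx("disruptive tests disabled by executor")
        return func(*args, **kwargs)
    return wrapper

@disruptive
def test_reset_queues(cfg):
    pass  # would reconfigure the device under test

Which exception the wrapper raises is exactly the judgment call above.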
Either way - no strong opinion yet, we need someone to actually run
these continuously to get experience :(