Message-ID: <87jzgz6i19.fsf@nvidia.com>
Date: Thu, 1 Aug 2024 23:31:02 +0200
From: Petr Machata <petrm@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Petr Machata <petrm@...dia.com>, Stanislav Fomichev <sdf@...ichev.me>,
	<netdev@...r.kernel.org>, <davem@...emloft.net>, <edumazet@...gle.com>,
	<pabeni@...hat.com>, Shuah Khan <shuah@...nel.org>, Joe Damato
	<jdamato@...tly.com>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH net-next v2 2/2] selftests: net: ksft: support marking
 tests as disruptive


Jakub Kicinski <kuba@...nel.org> writes:

> On Thu, 1 Aug 2024 10:36:18 +0200 Petr Machata wrote:
>> You seem to be right about the exit code. It was discussed some time
>> ago that SKIP is considered a sort of failure: as the person running
>> the test, you would want to go in and fix whatever configuration issue
>> is preventing the test from running. I'm not sure how it works in
>> practice, whether people look for skips in the test log explicitly or
>> rely on exit codes.
>> 
>> Maybe Jakub can chime in, since he's the one that cajoled me into
>> handling this whole SKIP / XFAIL business properly in bash selftests.
>
> For HW testing there are a lot more variables than just "is there some
> tool missing in the VM image". Not sure how well we can do at detecting
> HW capabilities and XFAILing without making the tests super long.
> And this case itself is not very clear cut. On the one hand, you expect
> the test not to run if it's disruptive and the executor can't deal with
> disruptive tests - IOW it's an eXpected FAIL. On the other hand, it is
> an executor limitation: the device/driver could have been tested if it
> weren't for the executor, so it's not entirely dissimilar to a missing
> tool.
>
> Either way - no strong opinion as of yet, we need someone to actually
> continuously run these to get experience :(

After sending my response I realized we talked about this once already.
Apparently I forgot.

I think it's odd that SKIP is a fail in one framework but a pass in
another. But XFAIL is not a good name for something that was not even
run. And if we add something like "omit", nobody will know what it
means.
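
Concretely, the difference boils down to how the runner folds per-test
results into the process exit code. A made-up sketch, not any
framework's actual code:

    # Illustrative only; result strings and the flag are invented.
    # skip_is_fail=True gives the "SKIP is a sort of failure" convention,
    # skip_is_fail=False gives the "SKIP is a pass" one.
    def fold_exit_code(results, skip_is_fail):
        if any(r == "fail" for r in results):
            return 1
        if skip_is_fail and any(r == "skip" for r in results):
            return 1
        return 0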

Ho hum.

Let's keep SKIP as passing in Python tests then...
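
To spell out what I mean, roughly (a sketch in the spirit of the
ksft.py helpers and the patch under discussion; names like KsftSkipEx,
ksft_disruptive and the DISRUPTIVE environment variable are from my
recollection and may not match the code exactly):

    # Illustrative sketch; not the actual ksft.py implementation.
    import functools
    import os

    class KsftSkipEx(Exception):
        """Raised by a test case that cannot run in this environment."""

    KSFT_DISRUPTIVE = bool(int(os.environ.get("DISRUPTIVE", "1")))

    def ksft_disruptive(func):
        # Gate disruptive cases behind the executor's opt-in.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not KSFT_DISRUPTIVE:
                raise KsftSkipEx("marked as disruptive")
            return func(*args, **kwargs)
        return wrapper

    def ksft_run(cases):
        fail = False
        for i, case in enumerate(cases, 1):
            try:
                case()
            except KsftSkipEx as e:
                # SKIP is reported in the KTAP output, but with the
                # convention above it does not flip the exit code.
                print(f"ok {i} {case.__name__} # SKIP {e}")
                continue
            except Exception:
                fail = True
                print(f"not ok {i} {case.__name__}")
                continue
            print(f"ok {i} {case.__name__}")
        return 1 if fail else 0

So with DISRUPTIVE=0 in the environment, a disruptive case shows up as
"ok ... # SKIP" and the run still exits 0.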
