Message-ID: <20240131102932.6caac1e2@kernel.org>
Date: Wed, 31 Jan 2024 10:29:32 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>, pabeni@...hat.com
Cc: netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
 linux-kselftest@...r.kernel.org, Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next] selftests/net: calibrate txtimestamp

On Wed, 31 Jan 2024 10:06:18 -0500 Willem de Bruijn wrote:
> > Willem, do you still want us to apply this as is or should we do 
> > the 10x only if [ x$KSFT_MACHINE_SLOW != x ] ?  
> 
> If the test passes on all platforms with this change, I think that's
> still preferable.
> 
> The only downside is that it will take 10x the runtime. But that
> will continue on debug and virtualized builds anyway.
> 
> On the upside, the awesome dash does indicate that it passes as is on
> non-debug metal instances:
> 
> https://netdev.bots.linux.dev/contest.html?test=txtimestamp-sh
> 
> Let me know if you want me to use this as a testcase for
> $KSFT_MACHINE_SLOW.

Ah, all good, I thought you were increasing the acceptance criteria.
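
FWIW the conditional version I had in mind is roughly this (untested
sketch; cfg_max_usec and calibrate() are made-up stand-ins, not the
actual knobs in txtimestamp):

#include <stdlib.h>

static int cfg_max_usec = 500;	/* assumed baseline timing budget */

static void calibrate(void)
{
	/* CI runners export KSFT_MACHINE_SLOW on debug/virtualized
	 * targets; only those would get the 10x relaxation. */
	if (getenv("KSFT_MACHINE_SLOW"))
		cfg_max_usec *= 10;
}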

> Otherwise I'll start with the gro and so-txtime tests. They may not
> be so easily calibrated, as we cannot control the GRO timeout nor
> the FQ max horizon.

Paolo also mentioned working on GRO; maybe we need a spreadsheet
for people to "reserve" broken tests to avoid duplicating work? :S

> In such cases we can use the environment variable to either skip the
> test entirely or (my preference) run it to get code coverage, but
> suppress a failure if it is due to timing only. Sounds good?

+1, I also think we should run and ignore the failure. I was wondering
if we can swap FAIL for XFAIL in those cases:

tools/testing/selftests/kselftest.h
#define KSFT_XFAIL 2

Documentation/dev-tools/ktap.rst
- "XFAIL", which indicates that a test is expected to fail. This
  is similar to "TODO", above, and is used by some kselftest tests.

IDK if that's a stretch or not. Or we can just return PASS with 
a comment?
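
The XFAIL variant would look roughly like this at exit time (untested
sketch; the failed/timing_only flags are hypothetical stand-ins for
whatever state the test tracks, and the include path assumes a test
under tools/testing/selftests/net/):

#include <stdbool.h>
#include <stdlib.h>
#include "../kselftest.h"	/* KSFT_PASS / KSFT_FAIL / KSFT_XFAIL */

static int exit_code(bool failed, bool timing_only)
{
	if (!failed)
		return KSFT_PASS;
	/* still run for coverage, but report timing-only failures as
	 * expected on machines the runner marked slow */
	if (timing_only && getenv("KSFT_MACHINE_SLOW"))
		return KSFT_XFAIL;
	return KSFT_FAIL;
}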
