Message-ID: <661d466b7c11b_1073d29442@willemb.c.googlers.com.notmuch>
Date: Mon, 15 Apr 2024 11:23:23 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Jakub Kicinski <kuba@...nel.org>, 
 Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: davem@...emloft.net, 
 netdev@...r.kernel.org, 
 edumazet@...gle.com, 
 pabeni@...hat.com, 
 shuah@...nel.org, 
 petrm@...dia.com, 
 linux-kselftest@...r.kernel.org, 
 willemb@...gle.com
Subject: Re: [PATCH net-next 1/5] selftests: drv-net: define endpoint
 structures

Jakub Kicinski wrote:
> On Sun, 14 Apr 2024 13:04:46 -0400 Willem de Bruijn wrote:
> > 1. Cleaning up remote state in all conditions, including timeout/kill.
> > 
> >    Some tests require a setup phase before the test, and a matching
> >    cleanup phase. If any of the configured state is variable (even
> >    just a randomized filepath) this needs to be communicated to the
> >    cleanup phase. The remote filepath is handled well here. But what if
> >    a test needs per-test setup? Say, change MTU or an Ethtool feature.
> >    Multiple related tests may want to share a setup/cleanup.
> > 
> >    Related: some tests may benefit from a lightweight stateless
> >    check phase to detect preconditions before committing to any setup.
> >    Again, say an Ethtool feature like rx-gro-hw, or AF_XDP metadata rx.
> 
> I think this falls into the "frameworking debate" we were having with
> Petr. The consensus seems to be to keep things as simple as possible.

Makes sense. We can find the sticking points as we go along.
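
For the setup/cleanup point, the kind of thing I had in mind is roughly
the sketch below. Illustrative only: temp_mtu() is a made-up name, not a
drv-net helper, and a real version would go through the infra's own
command wrappers rather than calling subprocess directly.

import subprocess
from contextlib import contextmanager

@contextmanager
def temp_mtu(ifname, value):
    """Set the MTU on ifname for the duration of the with-block."""
    # Remember the original value so the cleanup does not need any
    # state handed to it separately.
    with open(f"/sys/class/net/{ifname}/mtu") as f:
        old = f.read().strip()
    subprocess.run(["ip", "link", "set", "dev", ifname,
                    "mtu", str(value)], check=True)
    try:
        yield
    finally:
        # Runs on normal exit and on exceptions raised in the test;
        # a hard kill of the process still bypasses this, which is
        # where the harness-level timeout handling comes in.
        subprocess.run(["ip", "link", "set", "dev", ifname,
                        "mtu", old], check=True)

# Several related tests can then share one setup/cleanup:
#
#   with temp_mtu("eth0", 9000):
#       run_jumbo_frame_tests()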

tools/testing/selftests/net already has a couple of hardware feature
tests (csum, gro, toeplitz, ..) that probably see little use now, since
they require manual runs. I'm really excited to include them in this
infra, to hopefully see more regular testing across more hardware.

> If we see that tests are poorly written and would benefit from extra
> structure we should try to impose some, but every local custom is
> something people will have to learn.

The above were just observations from embedding tests like those
mentioned above in our internal custom test framework. Especially with
heterogeneous hardware, a lot of it is "can we run this test on this
platform", or "disable this feature because it interacts with the
feature under test" (e.g., HW-GRO and csum.c).

> timeout/kill is provided to us already by the kselftest harness.
> 
> > 2. Synchronizing peers. Often both peers need to be started at the
> >    same time, but then the client may need to wait until the server
> >    is listening. Paolo added a nice local script to detect a listening
> >    socket with sockstat. Less of a problem with TCP tests than UDP or
> >    raw packet tests.
> 
> Yes, definitely. We should probably add that with the first test that
> needs it.
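
When that first test shows up, the client-side wait could look roughly
like the sketch below: poll ss until the server's port appears. The
helper name and numbers are illustrative; the real version might just
as well parse sockstat, as Paolo's local script does.

import subprocess
import time

def wait_for_listener(port, proto="tcp", timeout=5.0):
    """Poll until a local socket is bound/listening on the given port."""
    flag = "-ltnH" if proto == "tcp" else "-lunH"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.run(["ss", flag], capture_output=True,
                             text=True, check=True).stdout
        for line in out.splitlines():
            cols = line.split()
            # Column 4 is Local Address:Port; -H drops the header line.
            if len(cols) >= 4 and cols[3].endswith(f":{port}"):
                return
        time.sleep(0.1)
    raise TimeoutError(f"no {proto} socket on port {port}")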
