Message-ID: <20240305.thuo4ahNaeng@digikod.net>
Date: Tue, 5 Mar 2024 17:39:44 +0100
From: Mickaël Salaün <mic@...ikod.net>
To: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Cc: Jakub Kicinski <kuba@...nel.org>, Mark Brown <broonie@...nel.org>, 
	keescook@...omium.org, davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com, 
	pabeni@...hat.com, shuah@...nel.org, linux-kselftest@...r.kernel.org, 
	linux-security-module@...r.kernel.org, jakub@...udflare.com
Subject: Re: [PATCH v4 00/12] selftests: kselftest_harness: support using
 xfail

On Tue, Mar 05, 2024 at 05:00:13PM +0100, Mickaël Salaün wrote:
> On Tue, Mar 05, 2024 at 04:48:06PM +0100, Przemek Kitszel wrote:
> > On 3/5/24 00:04, Jakub Kicinski wrote:
> > > On Mon, 4 Mar 2024 22:20:03 +0000 Mark Brown wrote:
> > > > On Wed, Feb 28, 2024 at 04:59:07PM -0800, Jakub Kicinski wrote:
> > > > 
> > > > > When running selftests for our subsystem in our CI we'd like all
> > > > > tests to pass. Currently some tests use SKIP for cases they
> > > > > expect to fail, because the kselftest_harness limits the return
> > > > > codes to pass/fail/skip. XFAIL which would be a great match
> > > > > here cannot be used.
> > > > > 
> > > > > Remove the no_print handling and use vfork() to run the test in
> > > > > a different process than the setup. This way we don't need to
> > > > > pass "failing step" via the exit code. Further clean up the exit
> > > > > codes so that we can use all KSFT_* values. Rewrite the result
> > > > > printing to make handling XFAIL/XPASS easier. Support tests
> > > > > declaring combinations of fixture + variant they expect to fail.
> > > > 
> > > > This series landed in -next today and has caused breakage on all
> > > > platforms in the ALSA pcmtest-driver test.  When run on systems that
> > > > don't have the driver it needs loaded the tests skip, but since this
> > > > series was merged skipped tests are logged but then reported back as
> > > > failures:
> > > > 
> > > > # selftests: alsa: test-pcmtest-driver
> > > > # TAP version 13
> > > > # 1..5
> > > > # # Starting 5 tests from 1 test cases.
> > > > # #  RUN           pcmtest.playback ...
> > > > # #      SKIP      Can't read patterns. Probably, module isn't loaded
> > > > # # playback: Test failed
> > > > # #          FAIL  pcmtest.playback
> > > > # not ok 1 pcmtest.playback #  Can't read patterns. Probably, module isn't loaded
> > > > # #  RUN           pcmtest.capture ...
> > > > # #      SKIP      Can't read patterns. Probably, module isn't loaded
> > > > # # capture: Test failed
> > > > # #          FAIL  pcmtest.capture
> > > > # not ok 2 pcmtest.capture #  Can't read patterns. Probably, module isn't loaded
> > > > # #  RUN           pcmtest.ni_capture ...
> > > > # #      SKIP      Can't read patterns. Probably, module isn't loaded
> > > > # # ni_capture: Test failed
> > > > # #          FAIL  pcmtest.ni_capture
> > > > # not ok 3 pcmtest.ni_capture #  Can't read patterns. Probably, module isn't loaded
> > > > # #  RUN           pcmtest.ni_playback ...
> > > > # #      SKIP      Can't read patterns. Probably, module isn't loaded
> > > > # # ni_playback: Test failed
> > > > # #          FAIL  pcmtest.ni_playback
> > > > # not ok 4 pcmtest.ni_playback #  Can't read patterns. Probably, module isn't loaded
> > > > # #  RUN           pcmtest.reset_ioctl ...
> > > > # #      SKIP      Can't read patterns. Probably, module isn't loaded
> > > > # # reset_ioctl: Test failed
> > > > # #          FAIL  pcmtest.reset_ioctl
> > > > # not ok 5 pcmtest.reset_ioctl #  Can't read patterns. Probably, module isn't loaded
> > > > # # FAILED: 0 / 5 tests passed.
> > > > # # Totals: pass:0 fail:5 xfail:0 xpass:0 skip:0 error:0
> > > > 
> > > > I haven't completely isolated the issue due to some other breakage
> > > > that's making it harder than it should be to test.
> > > > 
> > > > A sample full log can be seen at:
> > > > 
> > > >     https://lava.sirena.org.uk/scheduler/job/659576#L1349
> > > 
> > > Thanks! the exit() inside the skip evaded my grep, I'm testing this:
> > > 
> > > diff --git a/tools/testing/selftests/alsa/test-pcmtest-driver.c b/tools/testing/selftests/alsa/test-pcmtest-driver.c
> > > index a52ecd43dbe3..7ab81d6f9e05 100644
> > > --- a/tools/testing/selftests/alsa/test-pcmtest-driver.c
> > > +++ b/tools/testing/selftests/alsa/test-pcmtest-driver.c
> > > @@ -127,11 +127,11 @@ FIXTURE_SETUP(pcmtest) {
> > >   	int err;
> > >   	if (geteuid())
> > > -		SKIP(exit(-1), "This test needs root to run!");
> > > +		SKIP(exit(KSFT_SKIP), "This test needs root to run!");
> > >   	err = read_patterns();
> > >   	if (err)
> > > -		SKIP(exit(-1), "Can't read patterns. Probably, module isn't loaded");
> > > +		SKIP(exit(KSFT_SKIP), "Can't read patterns. Probably, module isn't loaded");
> > >   	card_name = malloc(127);
> > >   	ASSERT_NE(card_name, NULL);
> > > diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
> > > index 20294553a5dd..356ba5f3b68c 100644
> > > --- a/tools/testing/selftests/mm/hmm-tests.c
> > > +++ b/tools/testing/selftests/mm/hmm-tests.c
> > > @@ -138,7 +138,7 @@ FIXTURE_SETUP(hmm)
> > >   	self->fd = hmm_open(variant->device_number);
> > >   	if (self->fd < 0 && hmm_is_coherent_type(variant->device_number))
> > > -		SKIP(exit(0), "DEVICE_COHERENT not available");
> > > +		SKIP(exit(KSFT_SKIP), "DEVICE_COHERENT not available");
> > >   	ASSERT_GE(self->fd, 0);
> > >   }
> > > @@ -149,7 +149,7 @@ FIXTURE_SETUP(hmm2)
> > >   	self->fd0 = hmm_open(variant->device_number0);
> > >   	if (self->fd0 < 0 && hmm_is_coherent_type(variant->device_number0))
> > > -		SKIP(exit(0), "DEVICE_COHERENT not available");
> > > +		SKIP(exit(KSFT_SKIP), "DEVICE_COHERENT not available");
> > >   	ASSERT_GE(self->fd0, 0);
> > >   	self->fd1 = hmm_open(variant->device_number1);
> > >   	ASSERT_GE(self->fd1, 0);
> > > 
> > > > but there's no more context.  I'm also seeing some breakage in the
> > > > seccomp selftests which also use kselftest-harness:
> > > > 
> > > > # #  RUN           TRAP.dfl ...
> > > > # # dfl: Test exited normally instead of by signal (code: 0)
> > > > # #          FAIL  TRAP.dfl
> > > > # not ok 56 TRAP.dfl
> > > > # #  RUN           TRAP.ign ...
> > > > # # ign: Test exited normally instead of by signal (code: 0)
> > > > # #          FAIL  TRAP.ign
> > > > # not ok 57 TRAP.ign
> > > 
> > > Ugh, I'm guessing vfork() "eats" the signal, IOW grandchild signals,
> > > child exits? vfork() and signals.. I'd rather leave to Kees || Mickael.
> > > 
> > 
> > Hi, sorry for not trying to reproduce it locally and still commenting,
> > but my vfork() man page says:
> > 
> > | The child must  not  return  from  the current  function  or  call
> > | exit(3) (which would have the effect of calling exit handlers
> > | established by the parent process and flushing the parent's stdio(3)
> > | buffers), but may call _exit(2).
> > 
> > And you still have some exit(3) calls.
> 
> Correct, exit(3) should be replaced with _exit(2).

Well, I think we should be good even if some exit(3) calls remain
because the environment in which the vfork() call happens is already
dedicated to the running test (with flushed stdio and a setpgrp() call); see
__run_test() and the fork() call just before running the
fixture/test/teardown.  Even if the test configures its own exit
handlers, they will not be run by its parent because it never calls
exit(), and the function it runs either ends with a call to _exit() or
with a signal.
