Message-ID: <69c4581e-09bd-4218-4d5f-d39564bce9bc@linuxfoundation.org>
Date:   Fri, 21 Jan 2022 09:27:56 -0700
From:   Shuah Khan <skhan@...uxfoundation.org>
To:     Jamal Hadi Salim <jhs@...atatu.com>,
        Davide Caratti <dcaratti@...hat.com>
Cc:     Victor Nogueira <victor@...atatu.com>,
        Baowen Zheng <baowen.zheng@...igine.com>,
        Simon Horman <simon.horman@...igine.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
        Vlad Buslov <vladbu@...dia.com>,
        David Ahern <dsahern@...il.com>, shuah@...nel.org,
        Shuah Khan <skhan@...uxfoundation.org>
Subject: Re: tdc errors

On 1/21/22 7:11 AM, Jamal Hadi Salim wrote:
> On 2022-01-21 04:36, Davide Caratti wrote:
>> On Thu, Jan 20, 2022 at 8:34 PM Jamal Hadi Salim <jhs@...atatu.com> wrote:
> 
> [..]>>
>>> So... How is the robot not reporting this as a regression?
>>> Davide? Basically kernel has the feature but code is missing
>>> in both iproute2 and iproute2-next..
>>
>> my guess (but it's only a guess) is that the tc-testing code is also
>> less recent than the code of the kernel under test, so it does not
>> contain new items (like 7d64).
> 
> Which kernel(s) + iproute2 version does the bot test?
> In this case, the tdc test is in the kernel already..
> So in my opinion it should've just run and failed, and a report
> sent indicating failure. Where do the reports go?
> 
> +Cc Shuah.
> 
>> But even if we had the latest net-next test code and the latest
>> net-next kernel under test, we would anyway see unstable test results,
>> because of the gap with iproute2 code.  My suggestion is to push new
>> tdc items (that require iproute2 bits, or some change to the kernel
>> configuration in the build environment) using 'skip: yes' in the JSON
>> (see [1]), and enable them only when we are sure that all the code
>> propagated at least to stable trees.
>>
>> wdyt?
>>
> 
> That's better than the current status quo, but it still has a human
> dependency IMO. If we can remove the human dependency the bot can do
> a better job.
> Example:
> One thing that is often a cause of failures in tdc is kernel config.
> A lot of tests fail because the kernel doesn't have the config compiled
> in.
> Today, we work around that by providing a kernel config file in tdc.
> Unfortunately we don't use that config file for anything
> meaningful other than to tell the human what kernel options
> to ensure are compiled in before running the tests (manually).
> In fact the user has to inspect the config file first.
> 
> One idea that will help in automation is as follows:
> Could we add a "environment dependency" check that will ensure
> for a given test the right versions of things and configs exist?
> Example check if CONFIG_NET_SCH_ETS is available in the running
> kernel before executing "ets tests" or we have iproute2 version
>  >= blah before running the policer test with skip_sw feature etc
> I think some of this can be done via the pre-test-suite but we may
> need granularity at per-test level.
> 
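As a side note, the 'skip: yes' idea Davide mentions corresponds to a field in the tdc test case JSON; a minimal sketch of what such an entry could look like (the id, command, and other fields here are illustrative, not an actual test from the tree):

```json
{
    "id": "a1b2",
    "name": "Add police action with skip_sw (needs newer iproute2)",
    "category": ["actions", "police"],
    "skip": "yes",
    "setup": ["$TC actions flush action police"],
    "cmdUnderTest": "$TC actions add action police rate 1kbit burst 10k skip_sw index 1",
    "expExitCode": "0",
    "verifyCmd": "$TC actions ls action police",
    "matchPattern": "action order [0-9]*: police",
    "matchCount": "1",
    "teardown": ["$TC actions flush action police"]
}
```

Flipping "skip" back to "no" once the iproute2 bits reach stable would re-enable the test.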

Several tests check for config support for their dependencies in their
test code - I don't see any of those in tc-testing. Individual tests
are supposed to check for not just the config dependencies, but also
any feature dependency, e.g. syscall/ioctl.

There are a couple of ways to fix this problem for tc-testing. One is to
enhance the tests to check for dependencies and skip with a clear message
on why the test is skipped.
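A rough sketch of what such per-test checks could look like, covering the two
cases Jamal mentions (kernel config and iproute2 version). The helper names are
hypothetical, not existing tdc API, and it assumes the running kernel exposes
/proc/config.gz (CONFIG_IKCONFIG_PROC):

```python
# Sketch of per-test dependency checks; helper names are hypothetical.
import gzip
import re
import shutil
import subprocess


def kconfig_enabled(option, config="/proc/config.gz"):
    """Return True if e.g. CONFIG_NET_SCH_ETS is =y or =m in the running kernel."""
    try:
        with gzip.open(config, "rt") as f:
            return any(line.startswith(option + "=y") or
                       line.startswith(option + "=m") for line in f)
    except OSError:
        # Config not exposed (CONFIG_IKCONFIG_PROC off); caller decides to skip.
        return False


def iproute2_version():
    """Parse 'tc -V' output like 'tc utility, iproute2-5.15.0' into (5, 15, 0).

    Returns None if tc is missing or the version string uses the older
    'iproute2-ssYYMMDD' snapshot format, which would need separate handling.
    """
    if shutil.which("tc") is None:
        return None
    proc = subprocess.run(["tc", "-V"], capture_output=True, text=True)
    m = re.search(r"iproute2-(\d+)\.(\d+)\.(\d+)", proc.stdout + proc.stderr)
    return tuple(int(x) for x in m.groups()) if m else None


# Example gate before running the "ets" tests:
if not kconfig_enabled("CONFIG_NET_SCH_ETS"):
    print("SKIP: CONFIG_NET_SCH_ETS not enabled in running kernel")
```

Hooking something like this into tdc's existing plugin/pre-suite mechanism, keyed
per test case, would give the per-test granularity discussed above.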

A second option is enhancing the tools/testing/selftests/kselftest_deps.sh
script that checks for build dependencies. This tool can easily be enhanced
to check for run-time dependencies as well, and used in your automation.

Usage: ./kselftest_deps.sh -[p] <compiler> [test_name]

	kselftest_deps.sh [-p] gcc
	kselftest_deps.sh [-p] gcc vm
	kselftest_deps.sh [-p] aarch64-linux-gnu-gcc
	kselftest_deps.sh [-p] aarch64-linux-gnu-gcc vm

- Should be run in selftests directory in the kernel repo.
- Checks if Kselftests can be built/cross-built on a system.
- Parses all test/sub-test Makefile to find library dependencies.
- Runs compile test on a trivial C file with LDLIBS specified
   in the test Makefiles to identify missing library dependencies.
- Prints a suggested target list for a system, filtering out tests that
   failed the build dependency check from the TARGETS in the selftests
   main Makefile, when optional -p is specified.
- Prints pass/fail dependency check for each test/sub-test.
- Prints pass/fail targets and libraries.
- Default: runs dependency checks on all tests.
- Optional test name can be specified to check dependencies for it.

thanks,
-- Shuah

