Date:   Fri, 21 Jan 2022 09:11:51 -0500
From:   Jamal Hadi Salim <jhs@...atatu.com>
To:     Davide Caratti <dcaratti@...hat.com>
Cc:     Victor Nogueira <victor@...atatu.com>,
        Baowen Zheng <baowen.zheng@...igine.com>,
        Simon Horman <simon.horman@...igine.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
        Vlad Buslov <vladbu@...dia.com>,
        David Ahern <dsahern@...il.com>, shuah@...nel.org
Subject: Re: tdc errors

On 2022-01-21 04:36, Davide Caratti wrote:
> On Thu, Jan 20, 2022 at 8:34 PM Jamal Hadi Salim <jhs@...atatu.com> wrote:

[..]
>> So... How is the robot not reporting this as a regression?
>> Davide? Basically the kernel has the feature but the code is missing
>> in both iproute2 and iproute2-next.
> 
> my guess (but it's only a guess) is that the tc-testing code is also
> less recent than the code of the kernel under test, so it does not
> contain new items (like 7d64).

Which kernel(s) + iproute2 version does the bot test?
In this case, the tdc test is already in the kernel.
So in my opinion it should have just run, failed, and sent
a report indicating the failure. Where do the reports go?

+Cc Shuah.

> But even if we had the latest net-next test code and the latest
> net-next kernel under test, we would still see unstable test results
> because of the gap with the iproute2 code.  My suggestion is to push
> new tdc items (that require iproute2 bits, or some change to the
> kernel configuration in the build environment) using 'skip: yes' in
> the JSON (see [1]), and enable them only when we are sure that all
> the code has propagated at least to the stable trees.
> 
> wdyt?
> 
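
(For anyone following along: as far as I understand [1], such an item
would look roughly like the one below. The id, commands and match
pattern are made up purely to show where the skip flag would sit.)

    {
        "id": "abcd",
        "name": "Illustrative only: new test disabled until iproute2 catches up",
        "category": ["actions", "gact"],
        "skip": "yes",
        "setup": ["$TC actions flush action gact"],
        "cmdUnderTest": "$TC actions add action pass index 1",
        "expExitCode": "0",
        "verifyCmd": "$TC actions get action gact index 1",
        "matchPattern": "action order [0-9]*: gact action pass",
        "matchCount": "1",
        "teardown": ["$TC actions flush action gact"]
    }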

That's better than the current status quo, but it still has a human
dependency IMO. If we can remove the human dependency, the bot can do
a better job.
Example:
One thing that is often a cause of failures in tdc is kernel config.
A lot of tests fail because the kernel doesn't have the required config
options compiled in.
Today, we work around that by providing a kernel config file in tdc.
Unfortunately we don't use that config file for anything meaningful
other than telling the human which kernel options to make sure are
compiled in before running the tests (manually).
In fact, the user has to inspect the config file first.
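
That manual step could be automated with something like the rough
sketch below (the tdc config path and the exact locations of the
running kernel's config are assumptions on my part, just to
illustrate):

    #!/usr/bin/env python3
    # Rough sketch only: automate the manual step of comparing the
    # options listed in tc-testing's config file against the running
    # kernel's config. The tdc config path below is an assumption.
    import gzip
    import os
    import re
    import sys

    TDC_CONFIG = "tools/testing/selftests/tc-testing/config"

    def config_options(path, opener=open):
        """Collect CONFIG_* symbols set to y or m in a config file."""
        with opener(path, "rt") as f:
            return {m.group(1) for line in f
                    if (m := re.match(r"(CONFIG_[A-Z0-9_]+)=[ym]", line))}

    def running_kernel_options():
        """Read the running kernel's config from the usual locations."""
        if os.path.exists("/proc/config.gz"):        # needs CONFIG_IKCONFIG_PROC
            return config_options("/proc/config.gz", gzip.open)
        boot = "/boot/config-" + os.uname().release  # common distro location
        if os.path.exists(boot):
            return config_options(boot)
        sys.exit("no kernel config found to check against")

    missing = config_options(TDC_CONFIG) - running_kernel_options()
    for opt in sorted(missing):
        print("missing from running kernel:", opt)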

One idea that would help with automation is as follows:
could we add an "environment dependency" check that ensures, for a
given test, that the right versions of things and the right configs
exist? For example: check that CONFIG_NET_SCH_ETS is available in the
running kernel before executing the ets tests, or that we have
iproute2 version >= blah before running the policer test with the
skip_sw feature, etc. I think some of this can be done via the
pre-test-suite, but we may need granularity at the per-test level.
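
Just to make the idea concrete, a per-test gate could look roughly
like the sketch below. The "dependencies" key in the test case and
these helper names are hypothetical, nothing like this exists in tdc
today:

    # Rough sketch of a per-test "environment dependency" gate.
    # The "dependencies" metadata and helper names are made up.
    import re
    import subprocess

    def iproute2_version():
        """Parse "tc -V" output, e.g. "tc utility, iproute2-5.15.0"."""
        out = subprocess.run(["tc", "-V"], capture_output=True,
                             text=True).stdout
        # Older builds print iproute2-ss<date> instead, so be defensive.
        m = re.search(r"iproute2-(\d+)\.(\d+)", out)
        return (int(m.group(1)), int(m.group(2))) if m else None

    def should_run(test, enabled_options):
        """Decide whether one tdc test case can run in this environment.

        test: the JSON test case as a dict, with a hypothetical
        "dependencies" entry; enabled_options: set of CONFIG_* symbols
        enabled in the running kernel (built as in the earlier sketch).
        """
        deps = test.get("dependencies", {})
        for opt in deps.get("kconfig", []):
            if opt not in enabled_options:
                return False, "kernel lacks " + opt
        minver = deps.get("min_iproute2")   # e.g. [5, 15]
        if minver is not None:
            ver = iproute2_version()
            if ver is None or ver < tuple(minver):
                return False, "iproute2 older than " + \
                    ".".join(map(str, minver))
        return True, ""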

cheers,
jamal
