Date:   Wed, 10 Nov 2021 15:54:56 +0200
From:   Leonard Crestez <cdleonard@...il.com>
To:     David Ahern <dsahern@...il.com>, Shuah Khan <shuah@...nel.org>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Ido Schimmel <idosch@...dia.com>,
        Seth David Schoen <schoen@...alty.org>,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        Jakub Kicinski <kuba@...nel.org>,
        David Ahern <dsahern@...nel.org>,
        linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 08/11] selftests: net/fcnal: Replace sleep after server start with -k

On 10/7/21 4:22 AM, David Ahern wrote:
> On 10/6/21 3:35 PM, Leonard Crestez wrote:
>>
>> I counted the [FAIL] or [ OK ] markers but not the output of nettest
>> itself. I don't know what to look for, I guess I could diff the outputs?
>>
>> Shouldn't it be sufficient to compare the exit codes of the nettest client?
> 
> mistakes happen. The 700+ tests that exist were verified by me when I
> submitted the script - that each test passes when it should and fails
> when it should. "FAIL" has many reasons. I tried to have separate exit
> codes for nettest.c to capture the timeouts vs ECONNREFUSED, etc., but I
> could easily have made a mistake. scanning the output is the best way.
> Most of the 'supposed to fail' tests have a HINT saying why it should fail.

It is not good to have a test whose correctness is this ambiguous; it 
makes reliable future changes difficult. In theory a uniform TAP format 
is supposed to solve this, but it is not applied inside selftests/net.
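
For reference, with TAP each result is a single machine-readable line, 
roughly like this (the test names are made up for illustration):

    1..3
    ok 1 - ipv4 ping, ns-A to ns-B
    not ok 2 - ipv4 tcp connect, port unreachable
    ok 3 - ipv6 ping, ns-A to ns-B # SKIP

so a consumer only needs to count "ok"/"not ok" lines instead of 
scraping free-form output.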

I attempted to write a script to compare two logs in their current 
format: https://gitlab.com/cdleonard/kselftest-parse-nettest-fcnal

It does a bunch of nasty scrubbing of irrelevant output, and I got it 
to the point where no diffs are found between repeated runs on the same 
machine.
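
For example, a scrub step could look something like this (the patterns 
in the actual script differ, this is just the idea):

    # mask out values that legitimately change on every run before diffing
    sed -E \
        -e 's/pid [0-9]+/pid N/g' \
        -e 's/port [0-9]+/port P/g' \
        -e 's/[0-9]+(\.[0-9]+)? ?ms/X ms/g' \
        fcnal.log > fcnal.scrubbed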

One nasty issue was that many tests kill processes inside log_test, so 
relevant output may be shown either before or after the "TEST: " result 
line. I solved this by associating any output up to the next ### 
separator with the previous test.
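
In shell terms the grouping is roughly this (a simplified sketch, the 
real script handles more corner cases):

    # tag output that appears after a "TEST: " result line with that test,
    # until a ### separator starts the next section
    awk '
        /^####/   { cur = ""; print; next }
        /TEST:/   { cur = $0; print; next }
        cur != "" { print "  [" cur "] " $0; next }
                  { print }
    ' fcnal.log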

>> The output is also modified by a previous change to not capture server
>> output separately and instead let it be combined with that of the
>> client. That change is required for this one, doing out=$(nettest -k)
>> does not return on fork unless the pipe is also closed.
>>
>> I did not look at your change, mine is relatively minimal because it
>> only changes who decide when the server goes into the background: the
>> shell script or the server itself. This makes it work very easily even
>> for tests with multiple server instances.
> 
> The logging issue is why I went with 1 binary do both server and client
> after nettest.c got support for changing namespaces.

It's possible to just compare the "client" and "server" logs separately 
by sorting them on their prefix.
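
For example (SERVER:/CLIENT: are made-up prefixes here; the real one 
depends on how the combined output is produced):

    # split the combined log back into per-role streams and compare those
    grep '^SERVER:' fcnal.log > server.log
    grep '^CLIENT:' fcnal.log > client.log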

I think a decent approach would be to do a bulk replace of all 
"run_cmd{,_nsb,_nsc} nettest" calls with a new "run_nettest" function 
that passes all of its arguments through to nettest itself. That 
run_nettest function could include a leading common "-t" arg that is 
parsed at the top of fcnal-test.sh.
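
Roughly like this (untested sketch; NETTEST_TIMEOUT is a made-up name 
and this assumes the existing run_cmd/run_cmd_nsb/run_cmd_nsc helpers):

    # single choke point for nettest invocations; common args such as the
    # timeout get added here instead of at every call site
    run_nettest()     { run_cmd     nettest -t "${NETTEST_TIMEOUT}" "$@"; }
    run_nettest_nsb() { run_cmd_nsb nettest -t "${NETTEST_TIMEOUT}" "$@"; }
    run_nettest_nsc() { run_cmd_nsc nettest -t "${NETTEST_TIMEOUT}" "$@"; }

followed by a mechanical s/run_cmd nettest/run_nettest/ (and the 
_nsb/_nsc variants) over fcnal-test.sh.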

--
Regards,
Leonard
