Date:   Thu, 18 Nov 2021 16:10:50 +0530
From:   Naresh Kamboju <naresh.kamboju@...aro.org>
To:     Tim Lewis <elatllat@...il.com>
Cc:     open list <linux-kernel@...r.kernel.org>,
        lkft-triage@...ts.linaro.org,
        "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>,
        Anders Roxell <anders.roxell@...aro.org>
Subject: Re: Re: [PATCH 5.10 000/578] 5.10.80-rc2 review

+ Kernel Selftest
+ Anders

Hi Tim,

Thanks for your email.

On Wed, 17 Nov 2021 at 20:07, Tim Lewis <elatllat@...il.com> wrote:
>
> > No regressions on arm64, arm, x86_64, and i386.
>
> I got
> proc-uptime-001: proc-uptime-001.c:39: main: Assertion `i1 >= i0' failed.

This is a known intermittent failure: the test runs longer than
expected and the runner script kills it.

I have noticed intermittent failures on slow devices.
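
For reference, /proc/uptime has two fields (uptime and idle time), and
if I remember right the test reads them repeatedly and asserts that
neither goes backwards. A rough shell sketch of the property behind the
`i1 >= i0' assertion (the real test is C, under
tools/testing/selftests/proc/, and loops on this check; my guess is
that i0/i1 are the idle field):

  # read the two fields of /proc/uptime twice
  read u0 i0 < /proc/uptime
  read u1 i1 < /proc/uptime
  # fail if the second read went backwards
  awk -v a="$i0" -v b="$i1" 'BEGIN { exit !(b >= a) }' ||
          echo "idle time went backwards: $i0 -> $i1"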

You can see the history of this test case intermittently failing on
linux-next here; I compare results across the stable-rc branches,
Linux mainline and linux-next:
https://qa-reports.linaro.org/lkft/linux-next-master/build/next-20210924/testrun/5897899/suite/kselftest-proc/test/proc.proc-uptime-001/history/


> I don't see proc-uptime-001 on
> https://github.com/Linaro/test-definitions/blob/master/automated/linux/kselftest/skipfile-lkft.yaml

We will add this as a known intermittent failure.
It would be great to report this to the test author and ask them to
review the test case to find out why it runs so long on slow devices.

>
> my proc-uptime-001 history

In general, when a test fails, please re-run it independently ten or
more times on the same kernel and device before reporting it as a
regression.
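
Something like this re-runs the binary ten times (the path is the one
from the log below; just a sketch, adjust it for your device):

  cd /opt/kselftest_intree/proc
  for i in $(seq 1 10); do
          ./proc-uptime-001 || echo "run $i: FAILED (exit $?)"
  done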

> 5.10.80-rc2-dirty:not ok 10 selftests: proc: proc-uptime-001 # exit=134

exit=134 means Aborted: the shell reports a process killed by a signal
as 128 + the signal number, and SIGABRT is 6, so 128 + 6 = 134.
When the test runs longer than the kselftest timeout (45 seconds, I
guess), it will be killed by the runner script.
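
A quick illustration of that convention (not from the test, just how
the shell reports a signal death):

  sh -c 'kill -6 $$'; echo "exit=$?"    # 6 = SIGABRT, prints exit=134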

> 5.10.80-rc1-dirty:ok 10 selftests: proc: proc-uptime-001

The test log below gives more insight: the test aborted (the assertion
failed and it dumped core), and the runner reported exit=134.
Test output log:
--------------------
# selftests: proc: proc-uptime-001
[   43.200262] audit: type=1701 audit(1618432600.255:6): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=11758 comm="proc-uptime-001" exe="/opt/kselftest_intree/proc/proc-uptime-001" sig=6 res=1
# proc-uptime-001: proc-uptime-001.c:39: main: Assertion `i1 >= i0' failed.
[   43.224097] audit: type=1701 audit(1618432600.259:7): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=11756 comm="timeout" exe="/usr/bin/timeout.coreutils" sig=6 res=1
# /usr/bin/timeout: the monitored command dumped core
# ./kselftest/runner.sh: line 33: 11756 Aborted /usr/bin/timeout --foreground "$kselftest_timeout" "$1"
not ok 11 selftests: proc: proc-uptime-001 # exit=134

However, it is good that this caught the system running slowly.

- Naresh
