Date:   Fri, 15 Nov 2019 11:21:23 -0500
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     Naresh Kamboju <naresh.kamboju@...aro.org>
Cc:     "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>, Netdev <netdev@...r.kernel.org>,
        Shuah Khan <shuah@...nel.org>,
        Anders Roxell <anders.roxell@...aro.org>,
        lkft-triage@...ts.linaro.org,
        "David S. Miller" <davem@...emloft.net>
Subject: Re: selftest/net: so_txtime.sh fails intermittently - read Resource
 temporarily unavailable

On Thu, Nov 14, 2019 at 3:47 AM Naresh Kamboju
<naresh.kamboju@...aro.org> wrote:
>
> selftests net so_txtime.sh fails intermittently on multiple boards and
> linux next and mainline.

This is a time-based test, so intermittent failures are definitely
possible. I had to trade off sensitivity to variance against total
running time.

The current tests schedule delivery in the future on a 10 msec
timescale. A case succeeds if dequeue happens within +/- 2 msec
(cfg_variance_us) of the programmed departure time.
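
In rough terms, the check works like this (a simplified sketch with
illustrative names, not a verbatim excerpt of so_txtime.c):

#include <error.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

static const int cfg_variance_us = 2000;        /* 2 msec tolerance */

/* Simplified sketch of the pass/fail check: the measured delay must
 * land within cfg_variance_us of the expected (programmed) delay.
 */
static void check_timing(char payload, int64_t delay_us, int64_t expected_us)
{
        fprintf(stderr, "payload:%c delay:%" PRId64 " expected:%" PRId64 " (us)\n",
                payload, delay_us, expected_us);

        if (llabs(delay_us - expected_us) > cfg_variance_us)
                error(1, 0, "exceeds variance (%d us)", cfg_variance_us);
}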

Note that statements of this kind are not errors and are printed every time:

> # SO_TXTIME ipv6 clock monotonic
> ipv6: clock_monotonic #
> # payloada delay452 expected0 (us)
> delay452: expected0_(us) #

This, however, does look like an error. It occurs with clockid
CLOCK_TAI, i.e. on the path that depends on the ETF qdisc (see the
sketch after the quoted output below):

> # SO_TXTIME ipv6 clock tai
> ipv6: clock_tai #
> # ./so_txtime read Resource temporarily unavailable
> read: Resource_temporarily #
> #
> : _ #
> # SO_TXTIME ipv6 clock tai
> ipv6: clock_tai #
> # ./so_txtime read Resource temporarily unavailable
> read: Resource_temporarily #
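
For context, the tai cases are the ones that enable SO_TXTIME with
clockid CLOCK_TAI, and on-time dequeue then depends on the etf qdisc
(CONFIG_NET_SCH_ETF) being installed on the egress device, e.g. with
something like "tc qdisc add dev DEV root etf clockid CLOCK_TAI
delta ...". The socket side looks roughly as follows (a minimal
sketch with illustrative names, not the test source):

#include <linux/net_tstamp.h>   /* struct sock_txtime */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>               /* CLOCK_TAI */

#ifndef SO_TXTIME
#define SO_TXTIME 61            /* fallback for older libc headers */
#define SCM_TXTIME SO_TXTIME
#endif

/* Pair the socket with CLOCK_TAI: per-packet departure timestamps are
 * then interpreted against that clock by the etf qdisc.
 */
static int enable_txtime_tai(int fd)
{
        struct sock_txtime so_txtime = { .clockid = CLOCK_TAI };

        return setsockopt(fd, SOL_SOCKET, SO_TXTIME,
                          &so_txtime, sizeof(so_txtime));
}

/* Request departure at an absolute CLOCK_TAI time (in nanoseconds) by
 * attaching an SCM_TXTIME control message to the packet.
 */
static ssize_t send_at(int fd, const void *buf, size_t len, uint64_t txtime_ns)
{
        char control[CMSG_SPACE(sizeof(txtime_ns))] = {0};
        struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = control,
                .msg_controllen = sizeof(control),
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_TXTIME;
        cm->cmsg_len = CMSG_LEN(sizeof(txtime_ns));
        memcpy(CMSG_DATA(cm), &txtime_ns, sizeof(txtime_ns));

        return sendmsg(fd, &msg, 0);
}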

Let me check a few other runs on the dashboard, too.

> [FAIL] 24 selftests net so_txtime.sh # exit=1
> selftests: net_so_txtime.sh [FAIL]
>
> Test run full log,
> https://lkft.validation.linaro.org/scheduler/job/1010545#L1234
>
> Test results comparison link,
> https://qa-reports.linaro.org/lkft/linux-next-oe/tests/kselftest/net_so_txtime.sh
> https://qa-reports.linaro.org/lkft/linux-mainline-oe/tests/kselftest/net_so_txtime.sh

This appears to have been flaky from the start, particularly on qemu_arm.

Looking at a few runs:

Failing runs exceed the variance bound:
https://lkft.validation.linaro.org/scheduler/job/1006586
https://lkft.validation.linaro.org/scheduler/job/1010686
https://lkft.validation.linaro.org/scheduler/job/1010630

"
delay22049: expected20000_(us) #
# ./so_txtime exceeds variance (2000 us)
"

"
delay13700: expected10000_(us) #
# ./so_txtime exceeds variance (2000 us)
"
"
delay29722: expected20000_(us) #
# ./so_txtime exceeds variance (2000 us)
"

These are easy to suppress by increasing cfg_variance_us and
optionally also stretching the delivery timescale.
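
Something along these lines, for example (the values and the helper
below are purely illustrative; the real knobs are cfg_variance_us in
so_txtime.c and the delivery times passed in by so_txtime.sh):

#include <stdint.h>

/* Illustrative only: widen the acceptance window so scheduling jitter
 * on slow or virtualized hosts no longer trips the check.
 */
static int cfg_variance_us = 8000;      /* e.g. up from 2000 */

/* Optionally also push departures further into the future, so the same
 * absolute jitter is a smaller fraction of the programmed delay
 * (hypothetical helper, applied to the per-packet delivery times).
 */
static uint64_t scale_delay_us(uint64_t delay_us)
{
        return delay_us * 4;    /* 10/20 msec cases become 40/80 msec */
}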

A failing run hit the "read: Resource temporarily unavailable" error
on TAI, like this report:
https://lkft.validation.linaro.org/scheduler/job/1008681

It is not the absence of CONFIG_NET_SCH_ETF. That is compiled in (as
a module) in these runs, according to the kernel config linked from
the dashboard.

The recv call must be returning EAGAIN because it hits the SO_RCVTIMEO
timeout, which is set to 100 msec. So the packet was lost. I don't
immediately have an explanation for this. I will try to run my own
qemu-arm instance.
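
For reference, that failure mode is roughly the following (a sketch
with illustrative names, not the test source; glibc's error() is what
renders EAGAIN as "read: Resource temporarily unavailable"):

#include <errno.h>
#include <error.h>
#include <sys/socket.h>
#include <sys/time.h>

/* A 100 msec receive timeout: if the packet never arrives, recv()
 * fails with EAGAIN instead of blocking forever.
 */
static void set_rcv_timeout(int fd)
{
        struct timeval tv = { .tv_usec = 100 * 1000 };

        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)))
                error(1, errno, "setsockopt rcvtimeo");
}

static ssize_t recv_or_die(int fd, char *buf, size_t len)
{
        ssize_t ret = recv(fd, buf, len, 0);

        /* A lost or overly delayed packet shows up here as
         * "read: Resource temporarily unavailable".
         */
        if (ret == -1)
                error(1, errno, "read");

        return ret;
}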

Naresh, when you mention "multiple boards", are there specific
microarchitectural details of the hosts that I should take into
account, aside from the qemu-arm virtualized environment itself?

A passing run detects missing ETF and skips those cases:
https://lkft.validation.linaro.org/scheduler/job/1006511

That is peculiar, as the dashboard for that run also shows ETF as available.

Very nice dashboard, btw!
