Message-ID: <CA+G9fYu=GXCZTQHU2kX0yoUxPgWkKVF44NJhadTP07uHF9St3g@mail.gmail.com>
Date: Wed, 20 Nov 2019 12:02:45 +0530
From: Naresh Kamboju <naresh.kamboju@...aro.org>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: "open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>, Netdev <netdev@...r.kernel.org>,
Shuah Khan <shuah@...nel.org>,
Anders Roxell <anders.roxell@...aro.org>,
lkft-triage@...ts.linaro.org,
"David S. Miller" <davem@...emloft.net>
Subject: Re: selftest/net: so_txtime.sh fails intermittently - read Resource
temporarily unavailable
On Fri, 15 Nov 2019 at 21:52, Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
>
> On Thu, Nov 14, 2019 at 3:47 AM Naresh Kamboju
> This appears to have been flaky from the start, particularly on qemu_arm.
This is because we emulate only 2 CPUs.
I am going to change this to emulate 4 CPUs for qemu_arm.
>
> Looking at a few runs..
>
> failing runs exceeds bounds:
> https://lkft.validation.linaro.org/scheduler/job/1006586
...
> # delay:29722 expected:20000 (us)
> # ./so_txtime: exceeds variance (2000 us)
> "
> These are easy to suppress, by just increasing cfg_variance_us and
> optionally also increasing the delivery time scale.
Alright!
The variance is currently 2000 us:
static int cfg_variance_us = 2000;
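For context, a rough sketch of how that variance check presumably behaves;
the exact code in tools/testing/selftests/net/so_txtime.c may differ, and
check_delay() plus the sample values (taken from the failing LAVA log above)
are only illustrative:

    /* Illustrative sketch, not the upstream implementation: compare the
     * measured receive delay with the scheduled delay and abort when the
     * difference exceeds the allowed variance. Slow emulated CPUs push
     * the measured delay far past the bound, which is what we see here.
     */
    #include <error.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cfg_variance_us = 2000;      /* raising this suppresses the flakes */

    static void check_delay(long delay_us, long expected_us)
    {
            fprintf(stderr, "delay:%ld expected:%ld (us)\n", delay_us, expected_us);
            if (labs(delay_us - expected_us) > cfg_variance_us)
                    error(1, 0, "exceeds variance (%d us)", cfg_variance_us);
    }

    int main(void)
    {
            check_delay(29722, 20000);      /* values from the quoted failing run */
            return 0;
    }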
> Naresh, when you mention "multiple boards" are there specific
> microarchitectural details of the hosts that I should take into
> account aside from the qemu-arm virtualized environment itself?
The easiest way to reproduce it is to run a 32-bit kernel and rootfs on an
x86_64 machine. It also reproduces on an arm32 BeagleBoard-X15 device.
The qemu-arm command line is:
qemu-system-aarch64 -cpu host,aarch64=off -machine virt-2.10,accel=kvm \
    -nographic -net nic,model=virtio,macaddr=BA:DD:AD:CC:09:02 -net tap \
    -m 2048 -monitor none -kernel zImage \
    --append "console=ttyAMA0 root=/dev/vda rw" \
    -drive format=raw,file=rpb-console-image-lkft-am57xx-evm-20191112073604-644.rootfs.ext4,if=virtio \
    -m 4096 -smp 2 -nographic
> Very nice dashboard, btw!
Thanks for your valuable feedback. Great to hear this :-)
- Naresh