Message-ID: <CAA93jw6bWOU3wX5tubkTzOFxDMWXdgmBqnGPAnzZKVVFQTEUDQ@mail.gmail.com>
Date:   Thu, 6 May 2021 07:53:51 -0700
From:   Dave Taht <dave.taht@...il.com>
To:     Frieder Schrempf <frieder.schrempf@...tron.de>
Cc:     NXP Linux Team <linux-imx@....com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations

I am a big fan of BQL - is it implemented in this driver?

cd /sys/class/net/your_device_name/queues/tx-0/byte_queue_limits/
cat limit
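
Note that byte_queue_limits shows up under sysfs whenever CONFIG_BQL is
set, whether or not the driver actually does the accounting. A more
direct check (assuming the i.MX8MM uses the freescale fec driver and
you have a kernel tree handy) is to grep for the BQL helpers:

grep -nE 'netdev_tx_(sent|completed)_queue' \
    drivers/net/ethernet/freescale/fec_main.c
# no matches would mean the driver never reports sent/completed bytes to BQL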

see also bqlmon from github

is fq_codel running on the ethernet interface? the iperf bidir test
does much better with that in place than with a fifo.

tc -s qdisc show dev your_device
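
If it isn't, a quick thing to try (assuming your interface is eth0 and
you have root) would be:

tc qdisc replace dev eth0 root fq_codel
# then re-run the bidir test with fq_codel in place and compare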

Also I tend to run tests using the flent tool, which will yield more
data. Install netperf and irtt on the target, and flent, netperf, and
irtt on the test driver box...
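
For example, on the target side (assuming both tools are installed and
the default ports are fine):

netserver        # netperf's control daemon, listens on TCP port 12865
irtt server &    # irtt responder, used by flent for the latency streams

then on the test driver box: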

flent -H the-target-ip -x --socket-stats -t whateveryouaretesting rrul
# the meanest bidir test there is

flent-gui *.gz

On Thu, May 6, 2021 at 7:47 AM Frieder Schrempf
<frieder.schrempf@...tron.de> wrote:
>
> Hi,
>
> we have observed a weird phenomenon with the Ethernet on our i.MX8M-Mini boards. Quite often, the measured TX bandwidth drops from its expected/nominal value to something like 50% (for 100M connections) or ~67% (for 1G connections).
>
> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>
> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
>
>         iperf3 -c 192.168.1.10 --bidir
>
> But even something simpler like this can be used to get the same information (with 'nc -l -p 1122 > /dev/null' running on the host):
>
>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>
> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
>
> If anyone has an idea what might be the reason for this, please let me know!
> Or maybe someone would be willing to do a quick test on their own hardware. That would also be highly appreciated!
>
> Thanks and best regards
> Frieder
>
> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC
