Date:   Thu, 1 Dec 2022 07:22:29 -0800
From:   Dave Taht <dave.taht@...il.com>
To:     Daniele Palmas <dnlplm@...il.com>
Cc:     David Miller <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Subash Abhinov Kasiviswanathan <quic_subashab@...cinc.com>,
        Sean Tranchetti <quic_stranche@...cinc.com>,
        Jonathan Corbet <corbet@....net>,
        Alexander Lobakin <alexandr.lobakin@...el.com>,
        Gal Pressman <gal@...dia.com>,
        Bjørn Mork <bjorn@...k.no>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        netdev@...r.kernel.org
Subject: Re: [PATCH net-next v2 0/3] add tx packets aggregation to ethtool and rmnet

On Thu, Dec 1, 2022 at 2:55 AM Daniele Palmas <dnlplm@...il.com> wrote:
>
> Hello Dave,
>
> On Wed, Nov 30, 2022 at 4:04 PM Dave Taht
> <dave.taht@...il.com> wrote:
> >
> > On Wed, Nov 30, 2022 at 5:15 AM Daniele Palmas <dnlplm@...il.com> wrote:
> > >
> > > Hello maintainers and all,
> > >
> > > this patchset implements tx qmap packet aggregation in rmnet and generic
> > > ethtool support for it.
> > >
> > > Some low-category, ThreadX-based modems are not capable of reaching the
> > > maximum allowed throughput in both tx and rx during a bidirectional test
> > > if tx packet aggregation is not enabled.
> > >
> > > I verified this problem with rmnet + qmi_wwan by using an MDM9207 Cat. 4 based modem
> > > (50Mbps/150Mbps max throughput). What is actually happening is pictured at
> > > https://drive.google.com/file/d/1gSbozrtd9h0X63i6vdkNpN68d-9sg8f9/view
> >
> > Thank you for documenting which device this is. Is it still showing
> > 150ms of bufferbloat in good conditions, and 25 seconds or so in bad?
> >
>
> New Flent test results available at
> https://drive.google.com/drive/folders/1-rpeuM2Dg9rVdYCP0M84K4Ook5kcZTWc?usp=share_link
>
> From what I can understand, it seems a bit better to me, but I'm not
> completely sure how much of that is directly related to the changes in v2.

Anything that can shorten the round trips experienced by flows such as
yours, and wedge more data packets in per round trip, would be a good
thing.

A switch to the "BBR" congestion controller might be a vast
improvement over what I figure is your default of cubic. On both
server and client:

# load the BBR module and make it the default for new connections
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr
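
To sanity-check that the switch took (these are standard sysctl names
on any recent kernel, nothing modem-specific assumed here):

# bbr should appear in the available list and be the active choice
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control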

And rerun your tests.

Over the years we've come up with multiple mechanisms for fixing this
on other network subsystems (BQL, AQL, TSQ, etc.), but something that
could track a "completion" - where we knew the packet had finally
gotten "in the air" and out of the device - would be best. It's kind
of hard to convince everyone to replace their congestion controller.
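
For reference, BQL is the one of those you can poke at from userspace:
on a driver that implements it, the per-queue limits are visible in
sysfs (paths below assume eth0 with a single tx queue; as far as I
know rmnet doesn't expose these, which is part of the problem here):

# current adaptive limit and bytes still in flight toward the device
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/inflight
# clamp the limit down to trade a little throughput for latency
echo 3000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max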

Any chance you could share these results with the maker of the test
tool you are primarily using? I'd like to think they would embrace
adding some sort of simultaneous latency measurement to it, and I
would hope that would help bring more minds to figuring out how to
solve the 25(!!) seconds worth of delay that can accumulate on this
path through the kernel.

https://github.com/Zoxc/crusader is a nice emerging tool that runs on
all OSes (it's written in Rust) and does a network test more correctly,
and far more simply, than flent. I especially like its staggered-start
feature, as it clearly shows a second flow trying to start with this
kind of latency already in the system, and failing miserably.
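
If you want to try it, the basic flow (going from its README; treat
the exact invocations as an assumption on my part, they may have
changed) is a server on one side and a client on the other:

# on the server end
crusader serve
# on the client end, pointed at that server's address
crusader test <server-ip>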

My comments on your patchset are not a blocker to its being accepted!

If you want to get a grip on how much better things "could be", slap
an instance of cake at bandwidth 40mbit on the up and 80mbit on the
down on your "good" network setup, watch the latencies fall to nearly
zero, watch a second flow that starts late grab its fair share of the
bandwidth almost immediately, and dream of the metaverse actually
working right...
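
Concretely, that's one tc line for egress plus an ifb redirect for
ingress (wwan0 below is a placeholder for whatever your rmnet /
qmi_wwan netdev is actually called):

# shape egress (upload) directly with cake
tc qdisc replace dev wwan0 root cake bandwidth 40mbit
# ingress (download) has to bounce through an ifb device to be shaped
modprobe ifb numifbs=0
ip link add ifb0 type ifb
ip link set dev ifb0 up
tc qdisc add dev wwan0 handle ffff: ingress
tc filter add dev wwan0 parent ffff: protocol all matchall \
        action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root cake bandwidth 80mbit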

> Regards,
> Daniele



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
