Message-ID: <CAGRyCJFG0kybDzwYrdj2-Y868KbePCVBxFXsOo5TTJ_4PwrQDQ@mail.gmail.com>
Date:   Tue, 15 Nov 2022 12:51:13 +0100
From:   Daniele Palmas <dnlplm@...il.com>
To:     Dave Taht <dave.taht@...il.com>
Cc:     Gal Pressman <gal@...dia.com>, Jakub Kicinski <kuba@...nel.org>,
        David Miller <davem@...emloft.net>,
        Paolo Abeni <pabeni@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Subash Abhinov Kasiviswanathan <quic_subashab@...cinc.com>,
        Sean Tranchetti <quic_stranche@...cinc.com>,
        Jonathan Corbet <corbet@....net>,
        Bjørn Mork <bjorn@...k.no>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        netdev@...r.kernel.org
Subject: Re: [PATCH net-next 1/3] ethtool: add tx aggregation parameters

Hello Dave,

On Mon, 14 Nov 2022 at 11:46, Dave Taht
<dave.taht@...il.com> wrote:
> > Tx packet aggregation makes it possible to overcome this issue: a single
> > URB holds N qmap packets, reducing the URB frequency.
>
> While I understand the use case... it's generally been my hope that we
> get to a BQL-like mechanism for 4G and 5G that keeps the latency under
> control. Right now, that latency can be really, really, really
> miserable - measured in *seconds* - and naively adding packet
> aggregation is what messed up Wifi for the past decade. Please lose 8
> minutes of your life to this (hilarious) explanation of why
> aggregation can be bad.
>
> https://www.youtube.com/watch?v=Rb-UnHDw02o&t=1560s
>

Nice video and really instructive :-)

> So, given a choice between being able to drive the modem at the
> maximum rate in a testbed... or having it behave well at all possible
> (and highly variable) egress rates, I would so love for more people to
> focus on the latter problem than the former, at whatever levels and
> layers in the stack it takes.
>

I get your point, but this is not just a testbed issue: I think the
huge tx drop caused by concurrent rx can also happen in real-life
scenarios.

Additionally, Qualcomm modems seem to be designed to be used this way:
as far as I know, all QC downstream kernel versions have this kind of
feature in the rmnet code.

I think this can be seen as adding one more choice for the user: by
default, tx aggregation in rmnet would be disabled, so no one should
notice this change or suffer latencies different from the ones the
current rmnet driver already has.

But for those who are affected by the same bug I'm facing, or who are
interested in a different use case where tx aggregation makes sense,
this feature can help.
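For reference, enabling it would look something like the following.
This is only a sketch: the parameter names and values here are
assumptions based on this series being coalesce-style ethtool knobs,
and the final naming depends on the merged version; the values would
normally be taken from what the modem advertises over the QMI control
protocol rather than chosen by hand.

```shell
# Hypothetical sketch (parameter names assumed, values illustrative):
# enable rmnet tx aggregation via ethtool coalesce parameters, using
# limits the modem reported over QMI.
ethtool -C rmnet0 tx-aggr-max-frames 32 \
                  tx-aggr-max-bytes 16384 \
                  tx-aggr-time-usecs 1000

# Back to the proposed default (aggregation disabled):
ethtool -C rmnet0 tx-aggr-max-frames 0
```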

Hope that this makes sense.

> As a test, what happens on the flent "rrul" test, before and after
> this patch? Under good wireless conditions, and bad?
>
> flent -H server -t my-test-conditions -x --socket-stats rrul
> flent -H server -t my-test-conditions -x --socket-stats
> --test-parameter=upload_streams=4 tcp_nup
>
> I have servers for that all over the world
> {de,london,fremont,dallas,singapore,toronto,}.starlink.taht.net
>

I've uploaded some results at
https://drive.google.com/drive/folders/1-HjhyJaN4oWRNv8P8C__KD9-V-IoBwbL?usp=sharing

The good network conditions were simulated with a callbox connected to
the LAN (there are also a few pictures of the throughput on the
callbox side while running the tests with tx aggregation
enabled/disabled).

Thanks,
Daniele

> > The maximum number of allowed packets in a single URB and the maximum
> > size of the URB are dictated by the modem through the QMI control
> > protocol: the values returned by the modem are then configured in the
> > driver with the new ethtool parameters.
> >
> > > Isn't this the same as TX copybreak? TX
> > > copybreak for multiple packets?
> >
> > I tried looking at how tx copybreak works to understand your comment,
> > but I could not find any useful document. Probably my fault, but can
> > you please point me to something I can read?
> >
> > Thanks,
> > Daniele
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
