Message-ID: <CAAP7ucK7EeBPJHt9XFp7bd5cGXtH5w2VGgh3yD7OA9SYd5JkJw@mail.gmail.com>
Date:   Thu, 5 Aug 2021 22:32:45 +0200
From:   Aleksander Morgado <aleksander@...ksander.es>
To:     Subash Abhinov Kasiviswanathan <subashab@...eaurora.org>
Cc:     Bjørn Mork <bjorn@...k.no>,
        Daniele Palmas <dnlplm@...il.com>,
        Network Development <netdev@...r.kernel.org>,
        stranche@...eaurora.org
Subject: Re: RMNET QMAP data aggregation with size greater than 16384

Hey Subash,

> > I'm playing with the whole QMAP data aggregation setup with a
> > USB-connected Fibocom FM150-AE module (SDX55).
> > See https://gitlab.freedesktop.org/mobile-broadband/libqmi/-/issues/71
> > for some details on how I tested all this.
> >
> > This module reports a "Downlink Data Aggregation Max Size" of 32768
> > via the "QMI WDA Get Data Format" request/response, and therefore I
> > configured the MTU of the master wwan0 interface to that same value
> > (while in 802.3 mode, before switching to raw-ip and enabling
> > qmap-pass-through in qmi_wwan).
> >
> > When attempting to create a new link using netlink, the operation
> > fails with -EINVAL, and following the code path in the kernel driver,
> > it looks like there is a check in rmnet_vnd_change_mtu() where the
> > master interface MTU is checked against the RMNET_MAX_PACKET_SIZE
> > value, defined as 16384.
> >
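For reference, the check in question looks roughly like this in
drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c (a paraphrased sketch
from my reading of the driver, not the exact upstream source):

  #define RMNET_MAX_PACKET_SIZE 16384

  static int rmnet_vnd_change_mtu(struct net_device *rmnet_dev, int new_mtu)
  {
          /* Any MTU above 16384 is rejected, so the value derived
           * from a master interface configured with MTU 32768 fails
           * here with -EINVAL. */
          if (new_mtu < 0 || new_mtu > RMNET_MAX_PACKET_SIZE)
                  return -EINVAL;

          rmnet_dev->mtu = new_mtu;
          return 0;
  }
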
> > If I set up the master interface with MTU 16384 before creating the
> > links with netlink, there's no error reported anywhere. The FM150
> > module crashes as soon as I connect it with data aggregation enabled,
> > but that's a different story...
> >
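For completeness, "creating the links with netlink" above boils down
to the equivalent of "ip link add link wwan0 name rmnet0 type rmnet
mux_id 1"; here is a minimal rtnetlink sketch of that request (the
interface names and mux id are just example values, and error
handling is trimmed):

  #include <linux/if_link.h>
  #include <linux/netlink.h>
  #include <linux/rtnetlink.h>
  #include <net/if.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  struct nlreq {
          struct nlmsghdr nlh;
          struct ifinfomsg ifi;
          char buf[256];
  };

  /* Append one netlink attribute to the message. */
  static struct rtattr *add_attr(struct nlmsghdr *nlh, unsigned short type,
                                 const void *data, unsigned short len)
  {
          struct rtattr *rta;

          rta = (struct rtattr *)((char *)nlh + NLMSG_ALIGN(nlh->nlmsg_len));
          rta->rta_type = type;
          rta->rta_len = RTA_LENGTH(len);
          if (len)
                  memcpy(RTA_DATA(rta), data, len);
          nlh->nlmsg_len = NLMSG_ALIGN(nlh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
          return rta;
  }

  int main(void)
  {
          struct nlreq req = { 0 };
          struct rtattr *linkinfo, *infodata;
          unsigned int master = if_nametoindex("wwan0"); /* example master */
          unsigned short mux_id = 1;                     /* example mux id */
          int fd;

          req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
          req.nlh.nlmsg_type = RTM_NEWLINK;
          req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL;
          req.ifi.ifi_family = AF_UNSPEC;

          add_attr(&req.nlh, IFLA_LINK, &master, sizeof(master));
          add_attr(&req.nlh, IFLA_IFNAME, "rmnet0", strlen("rmnet0") + 1);

          /* IFLA_LINKINFO { IFLA_INFO_KIND "rmnet",
           *                 IFLA_INFO_DATA { IFLA_RMNET_MUX_ID } } */
          linkinfo = add_attr(&req.nlh, IFLA_LINKINFO, NULL, 0);
          add_attr(&req.nlh, IFLA_INFO_KIND, "rmnet", strlen("rmnet") + 1);
          infodata = add_attr(&req.nlh, IFLA_INFO_DATA, NULL, 0);
          add_attr(&req.nlh, IFLA_RMNET_MUX_ID, &mux_id, sizeof(mux_id));
          infodata->rta_len = (char *)&req.nlh + req.nlh.nlmsg_len
                              - (char *)infodata;
          linkinfo->rta_len = (char *)&req.nlh + req.nlh.nlmsg_len
                              - (char *)linkinfo;

          fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
          if (fd < 0 || send(fd, &req, req.nlh.nlmsg_len, 0) < 0) {
                  perror("rmnet link create");
                  return 1;
          }
          close(fd);
          return 0;
  }
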
> > Is the limitation imposed by the RMNET_MAX_PACKET_SIZE value still
> > valid in this case? Would changing the max packet size to 32768 be
> > a reasonable approach? Am I doing something wrong? :)
> >
> > This previous discussion for the qmi_wwan add_mux/del_mux case is
> > relevant:
> > https://patchwork.ozlabs.org/project/netdev/patch/20200909091302.20992-1-dnlplm@gmail.com/
> > The suggested patch has not yet been included in the qmi_wwan driver,
> > and therefore the user still needs to manually configure the MTU of
> > the master interface before setting up all the links, but at least
> > there seems to be no hardcoded maximum limit.
> >
> > Cheers!
>
> Hi Aleksander
>
> The downlink data aggregation size shouldn't affect the MTU.
> The MTU applies to the uplink only, and there is no correlation with
> the downlink path.
> Ideally, you should be able to use the standard 1500 bytes (plus
> additional size for the MAP header) for the master device. Is there
> some specific network that uses more than 1500 bytes for the IP
> packet itself in the uplink?
>

I may be mistaken then in how this should be set up when using rmnet.
For the qmi_wwan case using add_mux/del_mux (Daniele correct me if
wrong!), we do need to configure the MTU of the master interface to be
equal to the aggregation data size reported via QMI WDA before
creating any mux link; see
http://paldan.altervista.org/linux-qmap-qmi_wwan-multiple-pdn-setup/
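
Concretely, the only thing done there before creating the links is
the equivalent of "ip link set wwan0 mtu 32768"; a minimal sketch of
that from C via SIOCSIFMTU (interface name and size are just example
values):

  #include <net/if.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Set the master interface MTU to the aggregation size reported
   * via QMI WDA, before any mux link is created. */
  static int set_master_mtu(const char *ifname, int mtu)
  {
          struct ifreq ifr;
          int fd, ret;

          fd = socket(AF_INET, SOCK_DGRAM, 0);
          if (fd < 0)
                  return -1;

          memset(&ifr, 0, sizeof(ifr));
          strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
          ifr.ifr_mtu = mtu;

          ret = ioctl(fd, SIOCSIFMTU, &ifr);
          close(fd);
          return ret;
  }

  int main(void)
  {
          /* Example values: wwan0 master, 32768 aggregation size. */
          if (set_master_mtu("wwan0", 32768) < 0) {
                  perror("SIOCSIFMTU");
                  return 1;
          }
          return 0;
  }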

I ended up doing the same here for the rmnet case, but if it's not
needed I can definitely change that. I do recall that I originally
left the master MTU untouched in the rmnet case and users had issues,
and increasing it to the aggregation size solved them; I assume that's
because the MTU should have been increased to accommodate the extra
MAP header, as you said. How much more size does it need on top of
the 1500 bytes?
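
For what it's worth, if I read the driver right, the QMAP header
prepended to every packet is a fixed 4-byte structure (a paraphrased
sketch of struct rmnet_map_header, not the exact upstream
definition):

  /* QMAP (MAP v1) header, 4 bytes, prepended to each IP packet. */
  struct rmnet_map_header {
          u8     flags;   /* command/data bit and pad length */
          u8     mux_id;  /* which rmnet link the packet belongs to */
          __be16 pkt_len; /* payload length, including padding */
  } __aligned(1);

so I would have naively expected 1500 + 4 to be enough on the uplink,
unless checksum-offload headers or extra padding need room on top of
that.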

-- 
Aleksander
https://aleksander.es
