Message-ID: <20210129170129.0a4a682a@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Fri, 29 Jan 2021 17:01:29 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Loic Poulain <loic.poulain@...aro.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org,
Willem de Bruijn <willemdebruijn.kernel@...il.com>
Subject: Re: [PATCH net-next] net: mhi-net: Add de-aggregation support
On Mon, 25 Jan 2021 16:45:57 +0100 Loic Poulain wrote:
> When the device-side MTU is larger than the host-side MRU, packets
> (typically rmnet packets) are split over multiple MHI transfers.
> In that case, the fragments must be re-aggregated to recover the
> packet before it is forwarded to the upper layer.
>
> A fragmented packet results in an -EOVERFLOW MHI transaction status
> for each of its fragments, except the final one. Such transfers were
> previously considered errors and the fragments were simply dropped.
>
> This patch implements the re-aggregation mechanism, allowing the
> initial packet to be recovered. It also prints a warning (once),
> since this behavior usually comes from a misconfiguration of the
> device (modem).
>
> Signed-off-by: Loic Poulain <loic.poulain@...aro.org>
> +static struct sk_buff *mhi_net_skb_append(struct mhi_device *mhi_dev,
> + struct sk_buff *skb1,
> + struct sk_buff *skb2)
> +{
> + struct sk_buff *new_skb;
> +
> + /* This is the first fragment */
> + if (!skb1)
> + return skb2;
> +
> + /* Expand packet */
> + new_skb = skb_copy_expand(skb1, 0, skb2->len, GFP_ATOMIC);
> + dev_kfree_skb_any(skb1);
> + if (!new_skb)
> + return skb2;
I don't get it: if you failed to grow the skb you return the next
fragment to the caller? So the frame just lost all of its data up to
where skb2 started? The entire fragment "train" should probably be
dropped at this point.
I think you can just hang the skbs off skb_shinfo(p)->frag_list.
Willem - is it legal to feed frag_listed skbs into netif_rx()?
> + /* Append to expanded packet */
> + memcpy(skb_put(new_skb, skb2->len), skb2->data, skb2->len);
> +
> + /* free appended skb */
> + dev_kfree_skb_any(skb2);
> +
> + return new_skb;
> +}
> +
> static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
> struct mhi_result *mhi_res)
> {
> @@ -143,19 +169,44 @@ static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
> remaining = atomic_dec_return(&mhi_netdev->stats.rx_queued);
>
> if (unlikely(mhi_res->transaction_status)) {
> - dev_kfree_skb_any(skb);
> -
> - /* MHI layer stopping/resetting the DL channel */
> - if (mhi_res->transaction_status == -ENOTCONN)
> + switch (mhi_res->transaction_status) {
> + case -EOVERFLOW:
> + /* Packet can not fit in one MHI buffer and has been
> + * split over multiple MHI transfers, do re-aggregation.
> + * That usually means the device side MTU is larger than
> + * the host side MTU/MRU. Since this is not optimal,
> + * print a warning (once).
> + */
> + netdev_warn_once(mhi_netdev->ndev,
> + "Fragmented packets received, fix MTU?\n");
> + skb_put(skb, mhi_res->bytes_xferd);
> + mhi_netdev->skbagg = mhi_net_skb_append(mhi_dev,
> + mhi_netdev->skbagg,
> + skb);
> + break;
> + case -ENOTCONN:
> + /* MHI layer stopping/resetting the DL channel */
> + dev_kfree_skb_any(skb);
> return;
> -
> - u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> - u64_stats_inc(&mhi_netdev->stats.rx_errors);
> - u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
> + default:
> + /* Unknown error, simply drop */
> + dev_kfree_skb_any(skb);
> + u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> + u64_stats_inc(&mhi_netdev->stats.rx_errors);
> + u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
> + }
> } else {
> + skb_put(skb, mhi_res->bytes_xferd);
> +
> + if (mhi_netdev->skbagg) {
> + /* Aggregate the final fragment */
> + skb = mhi_net_skb_append(mhi_dev, mhi_netdev->skbagg, skb);
> + mhi_netdev->skbagg = NULL;
> + }
> +
> u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> u64_stats_inc(&mhi_netdev->stats.rx_packets);
> - u64_stats_add(&mhi_netdev->stats.rx_bytes, mhi_res->bytes_xferd);
> + u64_stats_add(&mhi_netdev->stats.rx_bytes, skb->len);
> u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
>
> switch (skb->data[0] & 0xf0) {